The Equal Protection of the Laws

Click here to read The Equal Protection of the Laws, Tussman’s seminal article on the Fourteenth Amendment, which appeared in the September 1949 issue of the California Law Review and was to become one of the most widely cited law review articles ever.  It was co-written with his good friend and U.C. Berkeley colleague Jacobus (Chick) tenBroek.  TenBroek, who was blind, went on to become one of the leading figures in the disability rights movement.

A rather prescient quote from the article:

We began this essay with the suggestion that Americans have been more concerned with liberty than with equality. Alvin Johnson, in a recent article in the Yale Quarterly, goes so far as to say that the idea of equality is no part of the authentic “American Ideology”.  But whatever our past or present preferences, it is certain that a concern with equality will be increasingly thrust upon us. We have tended to identify liberty with the absence of government; we have sought it in the interstices of the law. What happens, then, when government becomes more ubiquitous? Whenever an area of activity is brought within the control or regulation of government, to that extent equality supplants liberty as the dominant ideal and constitutional demand.

Academic Debris

Report to the Regents

I was holding forth to a class of seniors, developing an eccentric interpretation of King Lear, when I noticed something strange happening to a student in the front row.  She was gazing at me, listening intently, but something was slowly oozing from her mouth, puffing out into a globe that obscured half her face. As I stared, faltering, the globe collapsed, the debris captured by a delicately questing tongue, the fixed gaze unbroken.  “Bubble gum,” I realized belatedly, as I tried to pick up the thread of thought, “just bubble gum.”  But I remember thinking, “It’s time to get out!”

     The fate that had delivered me to the brink of retirement had timed it well. The juxtaposition of Lear and bubble gum was a hint of things I would not have to get used to.  In any case, it was not a choice; I was in the last cohort that had to retire at 67, and I considered myself lucky not to have the option to continue to 70 or even beyond that, if anti-age-discrimination zealots were to have their misguided way, until the sway of tenure gave way to the judgment of the Senility Board.

     So I became Emeritus.  It was a gentle passage. A student, graduating, is thrust into a colder world, into a radically different way of life.  The Professor, retiring, may merely shed the responsibilities of teaching, drift slowly toward the quieter edges of the academic community, and continue the pattern of his life undisturbed, except, perhaps, by the disappearance of all excuses. If I were a scientist and had to relinquish scarce laboratory space, the break would be more drastic, but Philosophy is one of those disciplines that one can pursue with little equipment, although “doing philosophy”—a parochial expression I’ve always detested—may require having someone to do it to.  But whatever one did in class, I had done enough of it; had, at the end, found it such a strain that after I stopped teaching I marvelled that I had ever been able to do it at all.

     And life continued. No more department meetings, of course, and senate meetings only when they promised amusement.  I kept the habit of lunching at the Faculty Club. It was, in some ways, a spiritual center of the campus, a place of gossip, of trial-balloons, of old-boy network transactions, of duels of wit, of committee meetings, of depressing “organ recitals,” a place to meet friends without appointment.  It was a faculty establishment counter-weight to Sproul Plaza and Telegraph Avenue, and although most of the faculty shunned it, I liked it and felt at home there.

      Strolling out through a sunny, student-strewn Faculty Glade after a leisurely lunch, I paused before the great gnarled buckeye that clung defiantly to life.  Some wit had once painted “Let me die” across its torso, but it had outlasted the paint.  A friend had dubbed it Yggdrasill, and it reminded me of the contorted Laocoön; but it was only a sort of emeritus tree, in no hurry to depart.

     No more classes, but the mind did not seem to come to a complete stop. Now there was time to devote to the unfinished business of a long professorial life—the essay abandoned after a striking opening paragraph, projects regretfully put aside for more urgent work, lines pursued but not brought to a triumphant close, neglected fugitive inspirations.  I was shocked to discover, as I sifted through old notes, how often the great new idea that seemed to have just popped into my head a few days ago had, according to the evidence lying irrefutably before me, also popped into my head fifteen years ago.

     There was, of course, no need to do anything, but the feeling that you are not doing something you are supposed to do persists long after you have become aware of the fact that you are no longer supposed to do it.  Even the familiar nightmare persists—I am hurrying down the hall in a panic to meet the class I have forgotten to go to for several months. How could I have forgotten! Would they still be there waiting! But I never reach the door…

          I still feel I am supposed to be doing the work my classes were supposed to be interfering with.  Doing nothing,  becoming a “terminal consumer” without the dignity of function,  is waiting around to die.  So I keep working or loitering around my word-processor, polishing off unfinished business.  In a gesture to years of teaching a course in Philosophy of Law I produced a clear and simple explanation—too long for an article, too short for a book, too reasonable for zealots—of the apparent conflict between The Rule of Law and Judicial Activism.  I did it without referring to this or that legal case, or using legal jargon, or quoting from or discussing the views of current jurisprudential figures and, of course, law reviews politely declined to publish it.

     I also wrote an informal account of the experimental program I was involved with in the 60’s that seems to have relieved me of the need to do something heavier—and unwanted—on the dismal state of what we used to call liberal education. I published a short and rather unsatisfying essay on the Religion of the Marketplace.  And I keep nibbling away at some other unconsummated inspirations.

     And something keeps haunting me.  I seem to think that I have a great undischarged debt to the University, that I owe it, or the Regents who represent it, or someone, an accounting. No one has asked for it. No one is aware that they want it. And if anyone wanted it and asked for it, they would almost surely not get it. And yet… I suffer from the fear of being ungrateful.       

     It is hard to understand the uniquely favoured life of an American university professor. Professors are not really “hired;” they are not simply employed to teach classes and write books and do research.  They are, if and when they get tenure, admitted to permanent membership in a special community, something like a secular clerical order, a community supported by the tithes of the broader community that needs, resents, and mistrusts it.  Essentially, the University is a stronghold of the mind. Professors are commissioned to think and to encourage others to think, to develop “thinking” as a way of life.  They have to do many things, but if they don’t spend their lives in thought they are impostors. How a life is spent in thought is a long story, a story easy to misunderstand since it is full of so many things that don’t look like thinking.  How little like Rodin’s statue starting up from time to time to shout “Eureka!”  Saying that the state hires people to do research and to teach the young does not quite get it right. It supports an intellectual community and its way of life—a strangely alien way of life.

     Each professorial journey of the mind leaves its own paper trail.  Annual reports list classes taught, papers delivered, grants applied for and received, manuscripts in progress, books published, committee assignments—all the signs that mark the trail of each assault on the Mt. Everest of the mind, or the less ambitious stroll toward the peak of some academic foothill up which, youthful dreams faded, we dutifully plod unless, alas, we move off the trail for a long picnic. But the record is, however detailed, strangely unrevealing of the professorial inner life, of the turmoil of the life of the mind, of the failures to achieve fruition, of what being a professor is really like. What, I wonder, do our governors, who have seldom been professors, or even professional intellectuals, know about the way of life of those over whose destiny they supposedly reign?  If glimpsed at all, half-hidden behind an administrative screen, the faculty must seem a vain, self-centered, impractical, and fundamentally ungrateful lot (now sprinkled with info and bio-millionaires)—an impression not likely to be corrected upon closer inspection. A closer inspection that, should anyone attempt it, would probably be greeted with alarm as a threat to academic freedom!

     And, if you wanted to, how would you find out?  I used to read every academic novel I could find, hoping for revealing insight into university life.  Often funny, but not, in the end, terribly rewarding.  Seldom a glimpse, beyond parody, of what teaching is really like; comic portrayals of struggles for prestige and the illusion of power; some fair detective stories with professorial sleuths and criminals; P.C. scandals; and, for want of anything more original, sex—variations on the archetypal Arthurian tale of Merlin and Vivian, the magician-pedant-sage enamoured of an apprentice enchantress—a graduate student—who exploits him, gets her degree, and leaves him entombed, enchanted or disenchanted, in a tree—a metaphor, as is now fashionable to say, for the library stacks—brooding over his foolishly unanticipated desertion. I have found the literature of academia more likely to mislead than instruct an interested outsider.

      Nevertheless, I am not going to try to write the missing novel, although I would if I could.  Nor is this a substitute for the oral history that some of my colleagues, major players of recent years, have contributed to the university archives. Nor is this a belated apologia pro vita sua since I don’t feel that guilty.  It is the sort of report I might like to read, from a long-time faculty member—unsystematic, desultory, reflective—without an urgent message, without—or even perhaps with—a last frantic grasping of the world’s indifferent lapel.

Tenure

     You cannot understand the university unless you understand tenure, but I do not propose to explain it, clarify its complexities, or even to defend it, although I certainly believe in it.  The academic community maintains itself by the selective co-option of candidates and that process has been subjected to  withering criticism: it is elitist, perpetuating its own biases, Euro-centered, fearful of a deep critique of class, gender, and cultural privilege, parading its arbitrary subjectivity as somehow transcendently objective, docilely serving the power-structure, fearful of life-giving new insight—and so on. I will not parade the defense against such generally silly charges.  Tenure, with all its faults—and its virtues overwhelm its faults—is essential to the integrity of the academic world.  Without it, a great community, however flawed, degenerates into a mere collection of employees and entrepreneurs—and temporary ones at that.

     Tenure, once achieved, frees one from the immediate control of the external community. It is difficult for a governor or a legislator or an irate citizen to reach in to fire or punish a tenured faculty member.  A professor can do many ridiculous things with impunity and that freedom from external sanction is the most obvious fruit of tenure.  In a business culture a non-business way of life—homeowners who have never met a payroll—is strangely nourished and tolerated.

       Important as that is, there are other, less appreciated, features of the tenure system.  The real point of tenure is that it protects you from your colleagues and it makes possible a long-range conception of intellectual life untroubled  by the urgencies of immediate fruitfulness.

     Take me, for example.  I have a PhD in philosophy from Berkeley and, after teaching there for half a dozen years, left under ambiguous circumstances. After less than a decade of teaching in the East (and having finally published a book) I returned to Berkeley as a full professor with tenure in the Philosophy department.  Apart from teaching Introduction to Philosophy I generally taught courses in political and legal philosophy.  Neither in substance nor in style was I, nor did I consider myself, a real philosopher anywhere near the difficult center of the philosophic or departmental enterprise.

     There is a difference between “philosophy” and “philosophy of…”  “Philosophy,” philosophy pure and simple (having, in the academic world, lost its suggestion of hard-won wisdom about life), concerns itself with some traditional problems like “free will,” or the problem—if it is a problem—of the external world or how we can know it or really know anything, for that matter. Or the reducibility of the mind to the brain. Or something about language or meaning. I will not produce a list of the authentic philosophical problems, but you get the idea.  Philosophy deals with them, with impressive cleverness.

     But the “philosophy of…” is a different matter.  It presupposes an immersion in something other than philosophy—in law or politics, or art, or science—and deals with problems that emerge in the course of those activities.

     This meant that after an apprenticeship studying Plato and Spinoza and Hobbes and Hume and Kant, etc., I spent most of my time pursuing studies in law and politics, not worrying—although I had put in time gnawing at those tasty bones—about the traditional philosophical problems.  Nor did I fall into the swing of philosophic fashion.  I overheard rather than participated in the arguments that swirled around Logical Positivism or the English school—G. E. Moore, Wittgenstein, etc.  I sampled but could not—and cannot now—stand the pretentious cloudy pomposity of the continental philosophers—Heidegger and all that—and wrote them off with distaste—as well as the later spate of fashionable Frenchmen.

     I do not intend to try to justify this indifference to the mainstream of academic philosophical activity or the consequence that I was of very little use to graduate students pursuing PhDs and jobs in the field and who had little time for things like law and politics that were, at best, marginal to professional academic philosophy.  I drifted almost entirely into undergraduate teaching:  introductory philosophy courses, dealing with some great works at a minimally technical level and, in the “philosophy of” area, developing distinctive courses in law and politics more useful for students in political science or law or students generally than for prospective graduate students in philosophy.  I did not argue for the inclusion of the legal or political field in the core of graduate philosophy education for many reasons, including my belief that if it were included I would have to work with graduate philosophy students and would be forced to deal with the way in which the philosophy profession dealt with the field—a way that seemed to me to be more or less worthless.

     As a result, I taught very large numbers of undergraduates and virtually no graduate students.  I was happy with the arrangement, although it left me outside the main concern of the department. And I sometimes had the feeling that, from the point of view of the department, I was dead weight. Or, at best, performing a service that was departmentally necessary but uninteresting professionally.  Everyone was very polite and I felt no pressure but I was not very involved in producing philosophers and that, on the whole, was what the department cared about, what it thought it was supposed to do as a department in a graduate-school-oriented research university.

     The point is: I was safe, secure, untouchable. I had tenure. I filled a slot that could have been filled by someone more involved with what was going on in the profession.  The world was full of them—bright philosophers more than willing to come to Berkeley with the word from the East or from England or even Europe.  If, instead of tenure, I had had to be rehired every few years by a decision of the department, the chances are that I would not have been around very long or, alternatively, I would have had to do—had I indeed been able to—what I did not think worth doing and get into the going game.  But I went my own way, politely tolerated by the department that could do nothing about me.  Even when I served several short stints as chairman I was essentially an outsider.  I don’t think I was regarded as a real philosopher by the department. But, as I said, I didn’t have to worry about it.  However one might assess the value of my life as a professor, I was able to live it only because, in my opinion, I was shielded by tenure from the pressure of departmental judgment.  For better or for worse.  The subtle point of tenure is that it frees you from pressure by colleagues.  Academic politics, as every insider knows, would be intolerable without the tenure truce.  Whether the academic community is better off or more socially useful because of this arrangement is hard to prove, although I do not hesitate to say so.  At any rate, I could not have lived the life I lived without it. Tenure allowed me to pursue my own path in the areas of education and politics and law unconcerned about the judgment of departmental colleagues.  Or of anyone else.  And to do it without concern for short-term results.

A Hospitable Refuge

     But perhaps I should say something about my relation to philosophy, or the academic philosophy that has for so long provided me with a hospitable refuge and from which I feel ungratefully estranged. 

     I remember, as a teenager, reading the family Everyman edition of Spinoza’s Ethics while riding the streetcars of Milwaukee.  Why? To what avail?  When much later I took a course in Spinoza, it felt strangely familiar. As an undergraduate at the University of Wisconsin I took several philosophy courses—a standard “Introduction” that swept through Locke, Berkeley, and Hume, which I found very exciting, and something about Pragmatism that introduced me to William James, whom I liked, and John Dewey, whom I found hard to like. I remember lugging Dewey’s Art as Experience—a heavy book—around the campus, some sort of badge I suppose, but I don’t remember what it says.  It is still on my shelf, still unmastered.  And, importantly, a class from Meiklejohn in which I finally encountered Socrates and became a lifelong Meiklejohnian. On his advice I left Madison, in 1937, and went to Berkeley to pursue graduate work in philosophy.

     I found myself quite out of my depth.  Although Madison was miles ahead of Berkeley in political sophistication, in philosophy Madison was quite parochial and not, I think, in Berkeley’s league. I had not even had an undergraduate major in philosophy and had lots of catching up to do.

     There were three groups in my philosophical world.  First, the faculty—seven intelligent men, a fairly close-knit group socially, engaging in civilized controversy.  The remorseless graduate student mind was inclined to be harsh in judgment of some of the faculty, but I respected most of them.  They had taken pride in the fact that any member could teach any course offered by the department, although recently that pride had been humbled by the admission that modern logic, after Principia Mathematica, needed expert handling and could be taught by only one member.

     Second, there was a small but powerful group of senior graduate students.  In those pre-war depression years there were few academic jobs, and the department allowed students to stay on far beyond the relatively short time needed to win the PhD. These were, as I saw them, seasoned philosophers, more like faculty than students, but in constant contact with rawer graduate students.  They were advanced in knowledge, deeply involved in the issues pervading the philosophical world—especially England—formidable in argument, purveyors of advice, and possessed of technical lore I saw little hope of acquiring.

     And then, my peer group, some very bright, some amiable drifters who, sooner or later, dropped out.  Philosophy was not one of the “hard” disciplines, but the more hard-headed among us—not including me—put their energies to work at logic, doing Principia proofs and scorning aesthetics, ethics, metaphysics and all such foggy subjects.  Those not in the logic elite seemed hapless by comparison.  We all served as teaching assistants and ambled towards our prelims after which, successfully negotiated, only a dissertation remained before we entered the slim job market.

     I drifted toward political and social subjects, without notable enthusiasm, and can’t remember any of the philosophy I studied.  I do remember that in the year before I took my exams I was interested in two things—Dostoyevsky and skiing.  And in political argument. Among my graduate student peers were a few Marxists or, as it later turned out, actual communists.  They were part of the cluster of bright students surrounding Oppenheimer.  I had left Wisconsin cured of my feeble Marxist tendencies and enjoyed a running battle with them.  Since they were very bright I could claim few victories, although I remember gloating as they disappeared for a few days to develop a response to the Hitler-Stalin pact that I and an equally non-communist friend had, by a stroke of luck, predicted.

      I took my exams, squeezed through, and early in 1941, some six months before Pearl Harbor, was drafted into the army for what was supposed to be a year’s stint. I had never held a gun in my life and was a private in the 7th Division stationed at Fort Ord when Pearl Harbor was attacked.  Shortly, I was sent to Officer Candidate School at Fort Benning, became a second lieutenant and was sent to Texas to train troops.  I can remember standing on a platform lecturing a battalion on the use of the machine gun—the only machine I’ve ever understood.  My request to transfer to the Ski Troops was denied and one day, without any initiative on my part, I was plucked out of Texas and dropped into Berkeley for a 6-month intensive course in Chinese.  When I once asked why I had been chosen I was told that it was because I was bilingual—that is, I, who am not very good at languages, had passed my French and German exams in graduate school! 

     I was sent to China and spent the rest of the war there, generally able to operate without an interpreter, involved in the effort to train Chinese troops and re-open the Burma Road.  I ended as a Major doing intelligence work and was saved from a possibly perilous assignment by the Hiroshima bomb, after which I turned down the chance to move to Shanghai and returned to Berkeley, to the wife I had married before I went to China and to the Philosophy Department.  With army-inspired discipline I polished off my dissertation, got my PhD and was launched on a teaching career.

     One of the senior philosophy graduate students I have mentioned had joined and was out to transform what was then called the Speech Department.  It offered a freshman course that was an alternative to the English Department’s course in Reading and Composition and it stressed the analysis of argument and writing.  Although I was uneasy about being part of a mere “speech” department, I accepted an invitation to join the maverick group of philosophers, historians, political scientists and psychologists running a lively enterprise teaching Supreme Court decisions and Platonic dialogues to undergraduates.  I enjoyed it, and there I met the powerful blind constitutional law scholar—Jacobus tenBroek—with whom I worked for a year writing an analysis of Equal Protection.  And then I moved to the Philosophy Department.

      But I still feel that I didn’t really belong there. Logicians went their own way with superb confidence, without my comprehension or cooperation. Logical Positivists were fighting a rear-guard action without my help.  Continental philosophy seemed foggy, pretentious, and hopeless, and the department had little or no use for it.  The English were riding high, but their emissaries impressed me as clever, acute, articulate, and formidable—but not important or, to me, interesting.  So I plodded along, working on political philosophy, law, and even education—out of the mainstream.  I left Berkeley to teach in the East for about eight years and returned to the philosophy department until I retired in 1982.


Emeritus Exit

     So here I am, without classes to teach, without an active role in an organization, with scraps of unfinished work that really need not be finished, with a variety of regrets and guilts, with knees that limit mobility, with assorted cracks in a physical facade, with lapses of memory, with undelivered messages that no one is waiting for.  Nothing that I have to do, but still with the sense that there are some things that I ought to do.  It is hard to imagine what life would be like without the feeling that there are things that I ought to do.  In fact, the prospect of living without “obligations” is devastating.  A vacation is a time in which a role and its obligations are to be laid aside for a bit.  The horror of retirement is that the role is to be stepped out of for good. We know the sense of relief with which, after a vacation, we resume the burdens of the role. But after retirement?  The essence of being a human adult, of having dignity, is to have obligations.  Obligations are role-connected. Thus, retirement threatens us with de-humanization.  Which is why, I suppose, we resist giving up, relinquishing the professorial role.

       The University is helpful.  A Professor one remains, although ambiguously “emeritus.” Retirement reveals its layers. You doff the mantle of Teacher; you may still play a part on committees.  And your research, your writing, can go on forever. You can keep on doing your work, your “real” work, with which your other duties may have interfered.  Two decades after formal retirement I still come down to my office to commune with my word-processor.

     But there is a private drama or tragedy or comedy over which we tend to draw a protective curtain—the spectacle of the mind in disarray, losing its power, its grip on the art of sustained coherence.  I do not mean the increasing difficulty in remembering names, nor remembering what one is about to say, easily derailed if even momentarily diverted.  These, I suppose, are common enough aspects of aging and, while annoying, are not terribly serious. More serious is the sense that one’s span of attention is shrinking.  Or perhaps the span of “control”…

     I seem to remember that, in my flourishing days, I could speak extemporaneously on something I had been thinking about—whether for a half-hour or an hour—with hardly the use of a note and with the feeling that I had a sense of the whole so that every part was under control, played its subordinate part in the composition—not too little, not too much.  The part was dominated by its place in the whole, did not get out of hand, did not try to go into business for itself, accepted the curb without a struggle.  One thing led to another easily and appropriately. I think of it, perhaps ineptly, as like the relation of the melody to the note, as if having the melody in mind determined the order and quality of the next note.  But now it is as if I cannot keep the whole thing—the theme—in mind; the parts are consequently ungoverned; instead of one thing growing out of another, things are merely added on without organic connection. The range of my control seems to shrink—from a book to an essay to a few paragraphs.  I grow less discursively coherent and become more aphoristic and heavily repetitive, dogmatic, assertive.  As if I can control a sentence but not much more beyond that.  I have, I think, lost my working sense of form.  And, with great disappointment and pain, I have finally given up on my great Milton-Hobbes project.

The Milton-Hobbes Project

     Ages ago, I wrote my doctoral thesis on The Political Theory of Thomas Hobbes. I still remember the relief and joy with which, the day after I passed my final oral exam, I carted loads of books back to the library with the feeling that I could now think about other things.  I had spent a good part of a year engaged in benign interpretation, protecting the creator of Leviathan against the usual hostile treatment doled out to a defender of strong sovereignty, a treatment not entirely due to the mood of a world that had suffered the ordeal of the war against totalitarian dictatorship.  As one might expect, I found that he had been misinterpreted; that, if you read him carefully, you could see that he didn’t mean one thing or another, that he was quite a decent fellow; that, while he had some streaks of crankiness, he had a grasp of something really important. I had grown quite fond of him, as had John Aubrey, in whose Brief Lives Hobbes has a starring role.  Still, it was with relief that I put him aside.

     “Put him aside” does not, of course, tell the whole story.  How much, in how many ways, he had infected my mind I could not tell then, nor even now.  Clearly, the problems of authority and subordination that he posed remained like a work-out machine in the gymnasium of the mind for the exercises of a lifetime. But subtle influences must have been at work undetected.  In those days I was a bit radical.  On the side of Parliament against the King, seeing Pym and Hampden as heroic fighters for freedom against Stuart tyranny.  Only later did I discover my taste for executive as against legislative power and replace Pym and Hampden with Wentworth and Cromwell.  I do not blame all this on Hobbes, but he may have played a part in my movement—a temporary movement now shifting into reverse—from Young Turk to Old Guard.

     In any case, Hobbes, no longer a preoccupation, remained on the scene. I resisted the advice of my departmental seniors to turn the Hobbes thesis into a book, in aid of the quest for tenure.  I did not think it was good enough as it stood and I did not want to invest more time wallowing in a work that had served me well but that would have been a diversion from my growing interest in law. And I think I was reluctant to have my life shaped by “accident.”  I remember leaving the army with more command of Chinese than of any other language save English and considering pushing into studies of classical Chinese and Chinese philosophy—as the post-war generation of Japanese experts had done with their wartime acquired knowledge of Japanese.  I was tempted, but I decided not to let my life be shaped by the accident of an Army assignment—little realizing that life is shaped almost entirely by such accidents.  I felt the same way about Hobbes, who happened to be around to write a thesis about. I was looking for a subject I could do with dispatch—a great work I could comment on, in the field of political philosophy, by a significant but somewhat neglected figure. I punched in the specifications and out popped Hobbes and Leviathan.  But much as I liked and appreciated Hobbes I did not want to make a career as a Hobbesist.  Hobbes scholarship has become a minor industry since those days and I do not regret not being a part of the boom.  I value Hobbes entirely for his political writings, and while he is full of wit and has a great capacity to turn a phrase, in the end he has one great idea—peace requires universal submission to a common authority, the difficult acceptance of subordination, the restraining of pride.  This is a theme especially appropriate in the teaching of rebellious youth and I made heavy use of him over many years. He remained in the background of my mind. I used him when appropriate but I did no further “work” on him.

      About Paradise Lost.  I had often tried to read it, but I bogged down—as do most people who try—after the first two books that scintillated with the great debates in Hell. But in the late 60’s it became a central text in the educational program I was involved with.  In that context I worked my way through it and, to my surprise, over the following years, fell in love with it.  It was, perhaps, a belated enchantment.  Paradise Lost was not in fashion.  T. S. Eliot, an undeniably significant poet but otherwise something of a fool, had launched a telling attack on Milton that I had heard of but ignored.  I kept reading Paradise Lost.

     It must have been in the early or middle 80’s that it dawned on me that I was involved with two great 17th century works, an epic and a treatise, by two great figures for whom the Civil War was a crucial experience; both preoccupied with authority and rebellion—one essentially secular, the other significantly religious.  I cannot remember the crucial moment in which I was overwhelmed by the idea that I was to do a comparative study of Leviathan and Paradise Lost.

      It is odd that such a study was, and is, still to be done. For over three centuries Hobbes and Milton have loomed as landmarks of Western civilization.  Contemporaries, Englishmen, both famous in their lifetimes, who seem, so far as I can determine, never to have met.  Their intellectual interaction, before the age of footnotes, is undocumented, unrecorded.  To this day, serious comparative studies are virtually non-existent. Perhaps because students and lovers of poetry are not much interested in political theory and do not grapple with Leviathan, and students of politics and political theory have little time for Paradise Lost.  In any case, a major comparative study had yet to be done, and I was overwhelmed by the desire to do it.

     What I intended, or think I intended, was a comparative study of Leviathan and Paradise Lost, not of Hobbes and Milton; of two great works, not of two interesting authors.  And if I were pressed to say what I meant by “comparative,” I don’t think I could come up with a very good answer.  How do you compare an epic and a treatise?  Compare the ideas, ignoring the poetry? What did they think about rebellion?  An almost guaranteed descent into pedantry.  What was the point of doing that?  Unfortunately, I did not think this through.  Here were two great works; I had a deep interest in each; and I would, somehow, write about them together.

     Alas! I hate to admit it, but I think it was, in spite of its attractiveness, a bad idea.  For something like a decade and a half I struggled with it, intermittently.  Sometimes, in spite of my desire to deal with the two works as self-contained masterpieces,  I would immerse myself in the world of 17th Century England, or I would read widely in the literature about Milton and try to read Milton’s prose works—works that I found, I confess, almost unreadable.  I defy anyone to enjoy Milton’s prose. But I read and re-read and re-read Paradise Lost. I re-read Leviathan, of course, although on Hobbes I relied on my memory of my graduate-school Hobbes period.

     I had some fun playing detective. Had they ever met?  I ran down every clue I could think of with no luck.  Could I prove that Milton had read some Hobbes?  No, although at some points the Miltonic language has a Hobbesian ring.  Could Hobbes, loving poetry and translating the Odyssey and Iliad into English in his old age, have not managed to read Paradise Lost?  Not a clue!  Pursuing such matters I almost felt like a scholar and enjoyed the illusion.

     And, of course, I wrote.  Mostly about Paradise Lost, tacking on a bit about Leviathan, almost as an afterthought.  First one organizing principle and then another, riding one inspiration after another. Pages, drafts, piled up—several hundred pages.  Some not bad, lots uninspired. It did not add up; it did not hang together.  I would give up and then, months later, glance at some of it and think it not bad and worth going on with or trying again.  And then, in a few months, abandoning it with a sigh of relief mingled with guilt.  Until now.  I can’t do it and will stop trying!

     Or at least I think so. I have put, altogether, years of work into the project, an enormous investment of time and energy.  Giving up on it is not easy.  It is to acknowledge a dismal failure. It is not that I have so little time.  It is that the very idea, which still seems so glowingly significant when I consider it abstractly, begins to fade as I work on it.  And now it dawns on me that it may not be the idea that fades but I who flag, fall short, can’t think it through clearly or, if I see some light, can’t seem to write it out.  What I write or have written seems, when I look at it, flat, shallow, uninspired, and I refuse to, I will not, write a dull book about Paradise Lost.  But it is hard to quit.  As I turn my back on the project I feel sad and depressed.  But at the thought of plunging once more into the task my heart sinks.  If I try again I will never leave off; it will be the last thing I ever do and it is not that good an idea and not what I want to die struggling with.  I leave it as a great unwritten book.

       But even as I say this I feel a familiar stir of excitement, a surge of interest, a flick of determination to do it after all. But I have felt all that before and have been lured, more than once, back into the inviting labyrinth.  No! No! No!

Can’t I Get Over the 60’s?

     It is now 1996, but I am often reminded of the fact—or at least the charge—that I am a case of arrested development, have not gotten over the traumas of the 1960’s.  It is probably true, and in my defensive moments, a bit grandiosely, I imagine someone saying to Edmund Burke, “Surely, Mr. Burke, isn’t it time for you to get over the French Revolution?”

     I grant the disproportion—I do not confuse myself  with Burke; the student uprising of the 60’s did not treat us to a reign of terror, although we veterans recall a whiff of tear gas and the occupation of the campus by police and the National Guard.  Lots of turmoil, Jacobins at the microphone, the interruption of some habitual rituals of academic life and, beyond the campus, some political consequences—like the presidency of Nixon instead of that of Hubert Humphrey—but on the whole the system rolled with the punches without suffering cataclysmic change.                 

     And even on campus, scars healed. Student energy directed at educational reform exhausted itself in the tendency to abolish “requirements” in the name of freedom until it gathered itself  to push for a new class of requirements—like ethnic or gender studies—pushing the claims of the margin against the center.  But these innovations, while making some claims on budgets and, along with affirmative action (not really a 60’s phenomenon), bringing some under-represented groups into academic life, still left the heart of the traditional academy largely unimpaired. Or at least, left traditional academic practice an option for those who sought it. With, perhaps, some grumbling about reduced budgets, more red-tape, and (sotto voce) more people around—ignorable but still irritating to purists—who didn’t really belong there. But even in the largely lily-white days I remember my faculty elders saying that they found it worthwhile addressing themselves only to the top ten percent of the barbaric or philistine student body (what were those blond crew-cut be-sweatered be-lettered clods doing in an institution of higher learning!) and I now found myself maliciously consoling unhappy academic colleagues that it should make no difference to them what a large part of the student body, that they ignored anyway, looked or sounded like.

     So, with the end of Viet Nam (and can we number those who have never gotten over that?) and the end of the cold war, enough had changed in the world to reduce the memory or experience of the student revolt of the 60’s—our current students were not even born then!—to a small blip on the receding historical horizon.  Even among the old Faculty Club veterans with whom I regularly consort the memory of those days is tinted with the wry humor that accompanies the sight of shrinking mountains. Even the bad old days are now the good old days; the pain is gone. That was more than a quarter of a century ago.  Forget it!

     So what does it mean to say that I have never gotten over the 60’s?  And especially since, now that I think of it, the charge is fundamentally correct.  Not in the sense that I think about it all the time, or that I talk about it a lot, but that it has colored or shaped my awareness to a degree that astonishes me. How?  About what?

     To begin with, let me say why I, at least, might properly be influenced in a lasting way by the 60’s—as contrasted with many of my university colleagues.  If you were a chemist or a physicist or any sort of hard scientist or engineer, your basic work was relatively untouched by the “issues” or spirit of student activism.  But if you were a social scientist or a humanist, the challenge to the society or social structure embodied in the revolt of the 60’s touched you and your work more troublingly.  The attack elsewhere on Jewish physics or on capitalist genetics—Mendelism-Morganism—was taken by us as a reductio ad absurdum of ideology, not taken seriously, not requiring noticing and answering.  But the social studies and the humanities seemed more vulnerable to the charge that they were rooted less in the study of something real or “objectively out there” than in the projection of the merely subjective categories of conventional, biased, and even oppressive social or cultural interests; that education ought to free the mind from subjection to these received habits of perception, not to warp the innocent mind into their shape.  This challenge to the legitimacy of social and humanistic studies placed a burden on the inhabitants of these academic domains not borne by the rest of the university, and made them more likely to remember the 60’s as not only a time of external troubles but as a challenge to the very basis of their professional lives.  So I would expect students of society to be more deeply affected by the 60’s than, let us say, our chemists or engineers.

     But for some—and I will now speak only of myself—there is more.  Education, it may be necessary to explain, is essentially “initiation.”  It is the process of initiating a child or a novice or a candidate into an ongoing social process. Learning the language, your mother tongue, is being brought into participation in a particular, existing, on-going social practice.  Professional or vocational education begins with the initiation of apprentices; most education is initiation into, broadly speaking, vocational life.  And liberal education, no exception to this rule, is initiation into the great political governing vocation. As I was, in one context or another, a teacher of introductory political philosophy, I was fated or doomed to encounter, head on, the generational revolt of the 60’s.  Initiation into what?  Into this awful society?

     I hasten to observe what is easily and usually overlooked. Initiation is not merely “acceptance;” it involves both appreciation and the essential and constructive art of criticism. Appreciation and criticism are two sides of the same coin.  I say this in the hope—a real hope that I am fairly sure is vain—of fending off the  charge that initiation is merely indoctrination and that it involves the dulling of the mind that would otherwise dare to be critical.  This is an ignorant view that fancies itself as sophisticated, sceptical, and even cynical. But if you want to play with it a bit, consider if we can initiate anyone into the musical arts—or turn out a music critic—without encouraging a love and appreciation of music, without which the criticism of music is silly nonsense.  The same, believe it or not, goes for the great art of governing, of politics.

      So the problem I had to confront in the course of teaching was that of finding the basis, theoretical and practical, of the initiation of the new, the maturing generation into the political—and I use this term broadly, aware of the negative freight it carries—life of the society; as the bearers, the continuers, the inheritors of a great culture that seems to have been displaying its deep flaws to those to whom it was extending the invitation to join.

     I will not repeat here what I have written about the experimental program that lived from 1964 to 1969 on the Berkeley Campus.  This, of course, did mark me deeply.  I cannot and do not want to forget it.  I have not “gotten over it” and that admission is enough to justify the charge that I have not gotten over the 60’s.

      But the point is that for some of us, working at a particular educational station, it was not just a time of disturbance, like an earthquake, a merely external shaking. It was a deep and inescapable challenge not only to the performance of the teaching function but to the very conception of the legitimacy of the function itself. The effect on me was to make me sensitive—or oversensitive—to the profound, inescapable, universal problem of initiation, of the generational succession, sometimes seen as generational conflict and even generational war.  In short, the educational vocation—at every level—is doomed by the very nature of initiation to be deeply involved with the traumas of the generational succession.  And nothing, in my lifetime,   exemplified that confrontation more starkly than the campus turmoil of the 60’s.

     I am not interested here in expounding on the concrete experiences of university life in those days, although it taught its lessons.  I am interested in examining the more subtle effects on the structure of my awareness.  As I have suggested, it has a great deal to do with the idea, the reality, of that fascinating fact of life—the generational succession. And I am surprised, now that I think of it, at how it seems to have taken over a good bit of my mind.

     To begin with, there is an awareness of the precarious fragility of continuity.  Any persisting human enterprise or institution rests on the establishment of habit, and the inertia of habit is enormous. Conservatism lives on the strength of the habitual, and any attempt to change a way of life is challenged, if not baffled, by the strength of the customary way of doing things.  The forces of continuity are such that it may seem odd to speak of its fragility.  And yet, if habit is powerful, it is still necessary to constantly re-establish the habit.  Skip a generation of habituation and the chain of continuity is broken, often beyond repair. “Carrying on” is dependent on the recruitment of successors, and when and if that fails something is ended. So the problem of succession, of the generational succession, is inescapable, vital.  And it takes many forms.

     Consider the relation of the creator to the inheritor.  It finds paradigmatic expression in Milton’s Paradise Lost—to say nothing of Genesis itself.  A Creator has wrested a domain from chaos, has organized it, established—or tried to establish—an order, presumably embodying a conception or plan that, in the mind of the creator, is good.  It is a considerable achievement, a successful battle against chaos, a triumphant imposition of form, a marshalling of means for the sake of an end, a conquest over recalcitrance and inertia and centrifugal tendencies. The establishment of a purposive order.

     But the immediate problem is continuity, the carrying on of the enterprise, the fulfilling of the intentions of the creator. So the creator must find heirs, successors who will lend their energies to the carrying out of a conception—someone else’s conception.  The heir needs a special set of virtues, not identical with those of the creator.  Meekness suddenly appears—only the meek can really inherit.  Fidelity, submission to a vision other than one’s own.  Loyalty, respect, even gratitude…unprodigal daughters.    

     The conflict or tension between the creator and the inheritor is a theme with many variations.  It plays itself out in every family that grapples with parental plans and the children’s struggle for independence; in every institution struggling to keep alive a dimming vision as it turns its destiny over to recruits with deviating aspirations of their own; in every constitutional order bowing to the authorizing vision of the founders while seeking to adjust to a new order of things.  Everywhere we look we see founders and successors, creators and inheritors, oldsters and youngsters caught in the precarious act of handing over, to carry on or to break away, to write the next chapter or to start a new book.  Since we are mortal and inter-dependent we can never escape the great trauma of the generational succession although, of course, we may grow insensitive to its pervasive presence.

     To not have gotten over the 60’s means, I think, that you see this drama playing itself out everywhere.  It does, of course, but that doesn’t mean you have to keep harping on it. But for people like me—bruised by the 60’s—harping on it is almost a way of life.

     If there was anything characteristic of the 60’s it was the proud claim of the student generation that it was indeed a self-generated generation, immaculately conceived, not really, in its search for identity, the children of the American middle class, ungenerated by them. A generation trying to break with its generators, denying any debts, rejecting any heritage.  And, even in some sort of defeat, remembering that at least it had engaged the established power “in dubious battle on the plains of Heaven [near the Chicago convention grounds] and shook the throne.” 

     But that is not the worst of it.  Friends still shake their heads sadly over my sour refusal to see the Cordelias of the world—those naive flower children—as heroines.  And why do I see Creon’s problem more forgivingly than Antigone’s? Or smile wryly at the sight of Creon’s idealistic son, in an amusing role-reversal, advising his father to bend and compromise?  Or see the central trauma of Moses’ life as the undoing of his attempt to enshrine the Tablets of the Law by the seductive attraction of the Golden Calf—that desert Woodstock—and the decision to skip a generation in the hope of finding people worthy of a promised land?  Or, much farther afield, insist that education cannot treat students as customers whose desires are to be satisfied, but must treat them as apprentices to be initiated into ongoing enterprises in a generational succession?

     All this, and much, much more, floods to mind when I consider the possible truth of the charge.  Every great social crisis teaches its own lessons, expresses one of the great recurring facts of life.  To forget or “get over” is to experience in vain.  Some of us—a dying generation—can never get over the experience of Munich and appeasement; some can never get over Viet Nam; and some can never get over the great turmoil of the generational succession of the 60’s.  To be unscarred, unmarked, unshaped by what we live through is the mark of a dulled consciousness, untaught by the curriculum of life. I could, I suppose, protest that I have learned other lessons, that the 60’s experience takes only its due place in my mind, that I am not distorted but only appreciative.  But I won’t bother.  O.K.  I never got over the 60’s…

Or the Thirties?

     A friend, listening to my babbling about the sixties, remarked, with a patient shake of the head, “But have you realized that you have never gotten over the Thirties?”  I had not realized it; it had never occurred to me; was it true?

     The 30’s—what was there to get over?  College from 1932 to ’41.  On a shoestring, waiting on tables, and then as a teaching assistant, always economically strapped.  My memories of my college days—undergraduate and graduate school alike—are not happy ones, but not because of my economic situation, a not uncommon one in those days.  But still, it seems a time of sullen discontent, really the most miserable time of my life. Why?

     Clearly not because of material circumstances.  I was living the usual two-track life of a healthy college student, inhabiting, like an amphibian, the world of sex and the world of ideas.  Each had its traumas and its joys, nothing was unalloyed. But it was the realm of ideas that I recall as the domain of discontent.  What was the world coming to?  And, what was I to do?  What happened then that I never got over?

     What was I to do?  Although a major preoccupation, connected with the state of the world, it seems in retrospect less a crisis of decision than a persisting overhanging gloom.  I went through an array of familiar temptations in the vocational desert of college kids without capital.  Although stirred by injustice, I would not go to law school, the goal of my argumentative friends. Nor to medical school, the goal of instinctive healers almost as difficult to reach, for Jewish students then, as the lovely sleeping maiden surrounded by a wall of fire.   In my junior year, in the absence of anything compelling and no doubt inspired by my father’s diffidently expressed hope that I might become an agronomist and go to Israel—still in those days Eretz Isroel or Palestine—I actually transferred to the Ag School and for a whole year trudged out to the far end of the Madison campus to take courses like Animal Husbandry. I still remember the sketched outline of a Percheron—not unlike the horses pulling brewery wagons in Milwaukee—with arrows pointing, among other things, to something labelled “fetlock.”  I stuck it out for a year, surrounded by farm boys who knew, as I did not, the difference between wheat and barley and had experienced crop-rotation.  I had hoped it would offer an escape from the world of business and competition.  I saw myself striding over ploughed furrows in fresh mornings wresting an honorable living from the soil, exploiting no one.  But after a year I returned to more familiar depressing haunts and ended by majoring in labor economics, although no longer with the illusion that it was my destiny to lead the insensate labor movement of Samuel Gompers and Bill Green into its proper class consciousness.

     I made one effort to escape from the gray vista stretching out before me, and it ended in comedy. On an impulse in my senior year I entered the Rhodes scholarship competition and, although my academic record was a shameful mixture of grades, my verbal antics overwhelmed the screening committee and I headed the list of candidates from the University. To a poor boy from Milwaukee the thought of going to Oxford was like being Dorothy transported by a gale to the magic land of Oz. I wallowed blissfully in the dream for a few weeks before I was brought back to reality by what I think of as “the fiasco of the purple suit.”

     A year earlier, a friend of mine had actually won the scholarship, but he was a serious and accomplished scholar and had been carefully groomed by supportive faculty sponsors. What stuck in my mind was that he had been carefully outfitted in a tweed jacket with—this was stressed—extra-long sleeves.  The day before the contest my supportive parents sent me to a good clothing store to buy a suit.  The impressive clerk coolly waved aside my feeble murmur about something loose in tweed and led me to a rich dark blue suit—a study in quiet dignity fit for the English aristocracy. Oh for a level playing field!  He sold a dozen suits a week; I bought one in five years. It was no contest. I was to pick up the suit in the morning and wear it to the University Club.  But outside the charismatic presence of the salesman-enchanter, in the clear light of the next morning I beheld the truth.  Sharp, double-breasted, ill-fitting, sleeves too short, shoulders only a bit short of Zoot-suit and—the crowning glory—not really a rich subdued blue but rather a  garish purple.  No doubt other factors came into play, but I have always attributed the debacle—my descent into cloddish demoralized inarticulateness—to my awareness of the purple suit, as fatal to my hopes as the shirt of Nessus.

     But there is a difference between “getting over” and growing up or moving on.  And if I have not gotten over the 30’s it is not because of great personal traumas.  What could have happened to me?  All my life I was either a student or a teacher, except for a period of four or five years of army life that separated the two conditions.  I look back on the army years with more affection than upon my years as a graduate student.  My only immersion in a non-academic culture.  I remember the great moment when I turned in my civilian clothes for regular army draftee clothing—stripped naked for a new life, a new beginning—like Joseph, indeed, emerging without his shirt of many colors from a hole in the ground.  But I bear no scars, and there is nothing about that great episode that I want or need to get over.

     But there is, I suppose, something thirtyish I never got over. It can be summed up in a single word: Munich.  And since my political state of mind owes much to Munich and the 60’s I should sketch what “Munich” signifies.  The domestic politics of the 30’s was about what to do about the Great Depression.  For me, in those days, Norman Thomas, a civilized democratic socialist, seemed to be right.  But I voted for Roosevelt, although I regretted that he was merely an improviser without—alas!—a clear guiding ideology.  Still, the New Deal and all that was better than the proposals of those to the left of Thomas who flirted—absurdly, I thought—with revolutionary hopes.

      But foreign policy was claiming my attention. In those days we were as yet undeceived by Orwell and Hemingway and rooted for friends in the Lincoln Brigade, watching in horror the looming triumph of Franco.  And there was Mussolini.  But dwarfing everything was the challenge of Hitler.  

     In spite of the fact that I am Jewish—deeply although secular—and was perturbed about Hitler, and although far from neutral as between Germany and England (or rather, the British Empire), I was, in the polemical categories of the day, something of an isolationist.  My heart was not really in it, but there I stood.  In the lull between the two world wars I had grown up with, absorbed, the bitter anti-war sentiments of war heroes like Siegfried Sassoon, had felt horror at the shameful and futile butchery of trench warfare, the stupidity of Generals, the dubious legitimacy of the cause of the Allies—empires struggling against challenge—and, as the crowning fact, the injustice of the treaty the victors imposed upon Germany, and consequently our culpability for the despair of Germany and the “understandable” appeal of Hitler.  Who could read Keynes on the Big Four, on the pervasive venality and stupidity, his suavely vicious attack on Woodrow Wilson, without coming to the view of a wronged Germany victimized by France and England aided by a hapless and naive America?  This sense of a wronged Germany, reinforced by the desire to refrain from being lured into once again pulling the chestnuts of Empire out of the fire for the British, cooled the zeal to intervene against the ominous German threat.  So there they were:  a rotten British Empire, a wronged and raging Germany, a Soviet Union, the kidnapper of a utopian hope, now bogged down in brutal tyranny.

     Nevertheless, for me, the great evil was Hitler, although I don’t think the horror of what was to come was seriously realized.  The prospect of defeating him seemed hopeless—all those tanks and planes and goose-stepping battalions.  We would have to do it.  We—not the hopeless French or the feeble English or the purge-riddled Russians. Lindbergh was telling us how irresistible the Germans were.  Why get into that mess (except that Hitler…)  That I would probably be drafted no doubt added an element of opposition to intervention, but I really don’t think, although I may be wrong, that that played the major part in determining my position.  I was for staying out (but I couldn’t get Hitler out of my mind).  I studied and knew more than my friends about the Neutrality Legislation that was the contribution of William Jennings Bryan, Wilson’s Secretary of State, to our foreign policy.  I pounced on every sign that Roosevelt was trying to get around it; I was convinced that he was trying to get around it, and my indignation about that overwhelmed any attempt at judging whether the policy of intervention was wiser than the neutrality or non-intervention or isolationism proclaimed by the, to me, unsavory elements rallying around “America First!”

     It is odd to realize that my intellectual and moral quandaries were not resolved by the triumphant work of the mind but by external events. I was drafted into the army before Pearl Harbor, for what was thought of as a year’s interlude. I didn’t really mind, since it only interrupted my graduate studies in philosophy, studies that I found quite unsatisfying.  Six months later came Pearl Harbor and the rather surprising declaration of war on us by Germany, so we were in it and all doubts were transcended.  From that moment I had no doubts about the rightness of our cause or what we—America—had to do—a happy condition that lasted while I was in the Army to the end of the war.

     So not to have gotten over the 30’s means for me that I have retained to this day the beliefs that, in internal affairs, the government should intervene in altering the habitual institutional structure on an ad hoc, tentative basis, curbing or guiding economic activity in the name of the public good; and that internationally, we should avoid appeasing irrational tyrants and be prepared to use force, if necessary.  During the cold war I was, predictably, a believer in “containment”. But with the end of the cold war I parted company with allies who, to my annoyance—to my disgust even—embraced the marketplace view of life and, in extreme cases, discovered in government the great enemy of the people.   In the struggle that seems to be shaping up between politics and economics, between government and corporation, I am generally on the side of government or, more accurately, I remain a stubborn believer in the great art of politics that, with all its corruptions, still represents the cooperative habits of human society, the rites of the public realm, the Quixotic attempt to transform mere power into legitimate authority.  I do not believe in salvation through privatization and competition.   I have not gotten over the Thirties.  I have not gotten over the Sixties.  And I have no desire to.

Defending a Myth

        What strikes me, looking back, is my odd resistance to enticing forms of insanity.  In the thirties I read Marx and the Marxists, of course, but my inoculation as a student of Selig Perlman kept me forever from embracing the faith in salvation by communist revolution.  I was always an anti-communist, although of the calmer sort who was not an ex-communist. (I do not consider the feverish confessions of betrayed lovers as revelations of the truth. Whittaker Chambers and others—I loved her but she turned out to be a whore!)

     In the sixties, resistance to the seductions of the New Left presented no problems at all.  I considered my San Diego colleague, Marcuse, a pretentious and destructive fool.  On campus “the movement” played out scenes from Dostoyevsky’s Possessed, activists seemingly unaware that beneath the despised conventional political crust throbbed the impatient hunger of a brutal “right” and that, if it came to force, the wrong people would win. I did not take the New Left seriously and at the height of the campus turmoil occupied myself with an experiment in liberal education that was both educationally radical and spiritually conservative.

     But while I avoided infatuation with the dogmas of the revolutionary left, I confess to a lifelong prejudice that might be regarded as leftish, although I do not regard it so:  I am an anti-capitalist.  I do not mind the marketplace when it knows its place.  I detest its elevation into the great paradigm of social life.  The smugly perverse conception of the mind as a marketplace of ideas infuriates me. The notion that the global marketplace is, finally, the fulfilling of the human heart’s desire, transcending mere politics, is the monstrosity of the stockbroker’s mind writ large. I do not want the profit motive to dominate the world. I take no comfort in the promise of a business civilization.  But I do not mean to develop this theme here.  I mention it only to stress the fact that in rejecting the seduction of the left I found no comfort—as the New Right found—in the inviting embrace of Capitalism.

     So what was left?  If, for the moment, we consider politics as merely a procedural framework that other forces seek to dominate, what offers itself as an alternative to economic interest or the economic class struggle?  There is religion, once thought safely dead or slumbering, now stirring itself, at home and abroad, into action in militant forms to the surprise of those who thought that the “enlightenment project” had settled all that.  And there is race or ethnicity, or whatever we should call it, emerging unexpectedly and with surprising power, though considered an archaic prejudice that really has no business on the modern or post-modern or post-post-modern scene.  Religion and race—can you believe it?—still making impertinent claims on the brink of the new millennium, challenging triumphant materialism and economic determinism.

      No comfort for me there, although, I confess, I welcomed the assertion of forces that could challenge the almost taken-for-granted supremacy of the economic.  No comfort, because I could not, unlike Ishmael, nail my little banner to those masts.  If I were religious, I would go to a temple or a synagogue, but I do not consider “Jewish” a purely religious category and, although I hesitate to say so, I do not consider myself very religious or even religious at all.  As for ethnic identification—and it is obviously in this sense, with all its fuzzy borderlines, that I consider myself and am a Jew—I do not regard ethnic consciousness or identification as an irrational barbaric habit to be discarded when we reach the heights of pure individualism, standing uncategorized as an untainted self.  I have a rather apprehensive respect for ethnicity, for the warmth of ethnic identification and association, for its adamancy in the face of the old Stoic cosmopolitan ideal of unmitigated humanity.  It presents deep problems, as affirmative action reveals.  We want to transcend something that is undeniably there, something that is not without its value, but something whose persistence troubles the humane mind.

     So, what is there to be when, in economic terms, you are committed neither to a command economy nor to the free market, and when you do not find salvation in religion or in race?  In my case, a devout belief in our constitutional political order—an order that stands on its own feet, compatible with a broad range of economic policy, from forms of socialism to forms of capitalism.  Hospitable to the varieties of religious experience without—in spite of some views—needing a religious basis.  Compatible also with a broad range of ethnic policy, from complete integration and ethnic blindness to acceptance of a variety of ethnic clustering. 

     I am, in short, although I have devoted much of my professional life to political philosophy or theory, a simple, possibly naive, believer in the American Constitutional Order.  I owe it the loyalty of a voluntary commitment. I enjoy rights under its aegis. I am bound by the obligations it imposes.  It is the framework within which legitimate political energies are to be deployed.  It is, above all, the great unifying force underlying the vast diversity embraced within the United States.

     But to take this seriously imposes some burdens. Although what I have just described comes close to being the standard American creed, its status within the American academic world is that of a “myth”—a suggestive story not to be taken as literally true or, if so taken, a kind of political fundamentalism forgivable in peasants, not in the intellectual world.  Political “enlightenment” displaces the familiar myth by a more realistic description of the interplay of self-interest and “power.”  “Consent” becomes mere habitual acquiescence, “obligation” a shadowy “moral” notion.  “Realism” displaces the simple-minded habits of normative idealism.

     My self-imposed burden has been the thorough-going defense of our governing myth.  Even as I say this, I am struck by how much of my intellectual and teaching life it sums up.  I have been and am a defender of the faith, a defender of the naive against corrosive sophistication, resisting the seductive appeal of street-wise “realism.”

     To begin with, I was, quite early, preoccupied with the problem of consent, with giving meaning to a moral commitment to membership in the concrete political community—not consent as merely an aspect of a hypothetical model.  My first book, Obligation and the Body Politic, developed my version of an “agreement” theory of the state, drawing heavily on theorists like Hobbes and Rousseau, summing up years of reflection and teaching.

     I have no doubt that I have agreed to abide by, to live under, the Constitution, that I am a voluntary member of the polity and, I believe, most citizens of the United States, if asked to consider their own status will agree that they are consenting members. But there is lots of resistance to the idea. “When did I agree?” is a common challenge, and even the production of a signature on the dotted line does not still protests.  I have always found this an interesting and illuminating question exposing the roots of political commitment, but I will not pursue it here.  It was, as I’ve said, an early theoretical and educational preoccupation. And it may reflect, as I look back, an over-emphasis on the dependence of “obligation”  upon “consent” or “agreement,”  caught up in what Maine called the progressive movement from “status to contract.” In my fuzzier old age I find myself more hospitable to claims that obligations are generated by other things than consent, although consent still serves as a paradigmatic case of obligation-generation.

     It is obvious to me now that to a believer in the political covenant it was natural to study the covenant itself, the Constitution. And, perhaps unaware of the obvious connection, I spent years in the study of constitutional law and judicial theory—a study that has all the fascination of casuistry, of hermeneutics, the art of interpretation, an art developed in any culture bound to square conduct with a sacred governing text, an art illuminated by deadly satirical masterpieces like Pascal’s Provincial Letters.

     I cut my teeth in a collaboration with my guide into the world of law, the gifted blind professor of constitutional law, Jacobus tenBroek.  We spent a year on an analysis of the Equal Protection clause of the 14th Amendment, in the course of which I became an addicted follower of the work of the Supreme Court, acquired my own set of Supreme Court reports and for decades read the court’s opinions.  While I found a broad range of problems unexpectedly fascinating, I was especially interested in the civil liberties, free speech, or First Amendment area. Eventually I wrote a book about the relation of government to the mind, committing the unforgivable sin of mentioning both in the same sentence or paragraph or neighborhood.  I find, on looking it over a quarter of a century later, that it says most of what I would want to say about freedom of speech, although I seem to have wasted a good deal of time lately in trying to find more to say on that subject.

     But my final concern with the Court and the Constitution has been with the problem of judicial “activism.” The standard model of the Court (the Supreme Court) in our system is that of an essentially non-political referee. Its pronouncement that an act of the legislative or executive branch is illicit or “unconstitutional” is a declaration that some relevant rule has been violated, not a declaration that the Court, in its wisdom, is imposing its judgment of policy on the situation. It is not supposed to be, as are the other branches of government, “political.”

     But merely to state this, except insincerely on ceremonial occasions, is to evoke the knowing smiles, if not the outright sneers or Homeric laughter, of the enlightened, the disabused, the realistic cynics who see through all that, who know the inside story; whose own youthful idealism was cured by clerkship or an equivalent spell in the sausage-factory of judicial life. Politics! Politics!  Justices in their robes are simply politicians in drag!

     As usual, life is easier for cynics or “realists.” They are out from under the burden of making sense of a governing myth. They may have some difficulty in explaining, in a satisfactory public way, what it is that a judge is supposed to do, how an Opinion differs from a Brief.  But they manage well enough, aided by the habits of obfuscation, unintelligibility, and philosophical simple-mindedness.

       Predictably, in retrospect, I took the rockier path. I defended the myth, the fundamental distinction between the judicial and the political, without the simple-mindedness of being a “strict constructionist.”  The problem, here as elsewhere, is that of defending a basic myth (itself a notion of great complexity) without falling into the assumption that a fundamentalist reading is the correct reading, or at least holds a preferred position.  This is a problem of great significance for the theory and practice of constitutionalism and governmental legitimacy.  But I will not pursue it here. I have discussed it in my unpublished article—rejected by all the leading law reviews—on Judicial Activism and the Rule of Law.

     It seems characteristic that my teaching activity should focus not only on the conception of education as initiation into some ongoing activity or vocation or culture, but that I should think of “liberal” education as initiation into the great political vocation, the ruling vocation. Although this subject is very close to my heart and one which I discuss endlessly at the slightest excuse, I have written enough about it (Experiment at Berkeley and The Beleaguered College) to enable me to curb the impulse here.

A Word on Plato

     Most of what I have said above indicates the pervasiveness of the influence on my intellectual life of a dominating conception of the “political”—from social contract theory, constitutionalism and the law, to liberal education.  But I have not yet mentioned another influence that, although deeply political, falls quite outside the range of this tradition—not a contract theorist, and not, challengingly, a democrat:  Plato.

      I will not even try to sum up what Plato has meant to me.  From my undergraduate encounter with the Apology and Crito and The Republic to the present he has been a haunting presence.  For years I taught The Republic in the introductory philosophy course.  It was a departmental tradition, not my own discovery or creation, and I entered into it with growing appreciation.  And today, more than a decade after retirement, I cannot forgive the department for irresponsibly abandoning that tradition. 

     I am not sure why, in spite of his importance for me, I have not written or published anything about Plato. Perhaps it is because of his importance to me, and perhaps because of my shame at knowing no Greek.  But I will try to make a few points here of a peripheral sort.

     First, using both a contract theorist and The Republic in an introductory course, I became increasingly aware of the radical difference between them.  Why did The Republic have so little, nothing really, to do with the ideas of rights, duties, obligations, legality, legitimacy, that loomed so large in the works of Hobbes, Locke, or Rousseau?  The short answer that eventually grew upon me is that The Republic is not a book about political theory or philosophy, not an analysis of the conditions or limits of political legitimacy.  It is an essay in “human obstetrics,” on bringing the human adult to term, on what happens in the second womb—the marsupial pouch—the polis within which we develop our characteristic selves, our cultured natures.  It therefore invents psychology, sociology, educational theory.  If it is a book about government it is about the government of children, about the great formative period of human life.  If this is a bit enigmatic, I will leave it with the suggestion that the art of government has these two related but distinct aspects. Beyond that, one of Plato’s great gifts to the world is his presentation of the figure of Socrates, the Socratic Gospel according to Plato.  One of the unique individuals of Western history—great, original, who, nevertheless, was in no sense an “individualist,” who considered himself a son of Athens, a functionary, a midwife, a gadfly, an unpaid teacher working for, deeply rooted in, the City; the enemy of sophistry in any of its recurrent incarnations; giving his life, bafflingly in the end, in the midst of discourse, as a sacrifice to the foolish majesty of the Law.

     But the disturbing influence of Plato is, for me, the fact that he is not a believer in democracy.  Governing is, for him, a necessary function and, like any function, he thought it should be performed by those who are best fit for it.  Not by those whose bent was for power or glory or wealth or producing and consuming, but by those whose possession of knowledge of “the good” fit them for the practice of the art of governing—the art of guiding life by the vision of the “good,” the proper end of intentional activity.  Governed, in short, by wisdom.

     Plato traced the path of degeneration from the government of wisdom through the government of the heroic, the government of the rich, of the producer or artisan, and ultimately of the consumer in its restrained and unrestrained modes—from wisdom through the anarchy of desire.  In a brilliant presentation of the range of types of polity in which the departures from the reign of wisdom are seen as forms of corruption (or sickness), Plato grants that, corruption for corruption, the democratic reign of desire, of the consumer, is the least harmful.  This is a sort of backhanded compliment to democracy—if we assume the operation of a sort of law of inevitable corruption, then democracy is the best, the least harmful, of the corrupt forms.

     Needless to say, this “justification” of democracy gave me little solace. One might grant that compared with the horrors of a theocracy gone into fanaticism, with a corpse-littered Napoleonic addiction to glory, with the heedlessness of a plutocracy to the horrors of the life of the unpossessing—compared with these, the mere excesses of a non-judgmental consumerism, the democratic equality of desires, seems the least of proffered horrors.  But this hardly solved my problem.

     The problem was how to make sense of the theory—the essential core of democracy—that each citizen in a democracy, in addition to his ordinary functional occupation, was to participate, was fit, or potentially fit, to participate in the great and crucial art and function of ruling.  This was not, indeed, to simply raise a clamor for what he wanted, to squeak like a wheel needing grease, but to deliberate about the common or public good and act to promote it.  To marshal wisdom to the direction of public affairs.  Wisdom?  Did the common man have that?  Knowledge of “the good”?

     The obvious move, in the light of recent fashion, is to do something with “value judgments.”  The upshot of which would be that assertions about “goodness” are neither true nor false, or are equally true, or “relative” or, in one way or another, not subject to the judgment of “reason” so that the views of the common man about what is “good”—whatever his functional station—are as worthy of respect as those of any self-appointed intellectual elite. Thus, the case for democracy is made to rest on a skeptical relativism and an egalitarianism that counts each person as “one” and accepts the quantitative weight of the majority as determining the public good. The “good” is what I want; and more I’s outweigh fewer I’s; and anyone can play—so majority rule, that is, democracy.

     Obvious as this line of argument is for anyone exposed to the élan of Logical Positivism and Value Theory in the course of his education, I am not happy with it, do not take it seriously. Any more than the view that I found widespread that “skepticism” was the necessary philosophical basis for a humanely liberal democracy—in spite of the obvious example of Hume who was both a skeptic and a political conservative.  But I will not pursue this argument here.  In the end, I declined to take this path.

     What then was left?  How reconcile my love of Plato with my devotion to democracy?  Where, in not being a democrat, did Plato go wrong? And the answer was to be found in the vicinity of that kind of knowledge called “knowledge of the good”—the wisdom needed by those bearing the burden of governing human affairs.

     Where did Plato go wrong?  What a question!  Was it Emerson who wrote on a student paper submitted to him by Oliver Wendell Holmes, Jr., in which the future Justice attacked Plato, “If you strike at a king, you had better kill him!”  Where, I gird myself to say, did Plato go wrong?

     His mistake is to think that “knowledge of the good” is the highest rung on the cognitive ladder, the ultimate specialty.  Whether or not the identification is a mistake—and I think it is not—I will consider the knowledge required for ruling, “knowledge of the good,” as the same as “wisdom.”  And my assertion is that wisdom is nothing like a cognitive specialty, to be acquired after an immersion in mathematics and the exercise of dialectic.  Being wise is more like being balanced than like knowing mathematics. Wisdom is more like common sense than like cognitive brilliance.

     In fact, “common sense” is the key to the mystery. Everyone, more or less, has some, more or less. It is a natural part of the equipment of a “rational animal”—almost a survival necessity. Very much like a sense of balance.  It can be developed, weakened, strengthened, and when it is strengthened or developed it is recognized as wisdom.  Unlike any particular cognitive art it is the common possession of common humanity and its development is independent of the development of our intellectual specialties.

     In fact, our whole concern with a range of civil liberties—speech, assembly, press—can be seen as protecting the activities by which common sense can develop into wisdom. This is the real justification for free speech, deeper than the view that we are guaranteed a right to “express” ourselves, and it is the basis for the criticism of our habits and institutions of communication as they fall short of serving that function.    

     So the mistake I daringly and probably foolishly attribute to Plato is that of not seeing that the necessary ruling wisdom is an extension of common sense, a common sense heightened by the arts of discussion, and not a cognitive specialty like mathematics.  This, at any rate, is how I accommodate my love of Plato with my commitment to democracy.  I actually believe that wisdom is more like common sense than like “policy expertise,” and my faith in the possibility of democracy rests on that. Socrates, whatever may be the case with Plato, is not at odds with that.

“Academic Debris” is a collection of unpublished writings completed between 1993 and 2002

          

Eighty-Seven

I am almost 87.  So you may think you are not interested in my state of mind.  If you are lucky, you will be, but most of you won’t get here. You will run out of steam, falter, perish in one ingenious way or another and never get to see the world as it really is, the fascinating view from here, out from under the spawning haze and the turmoil of silly means-ends games.  In fact, most of my friends didn’t make it—are safely dead.

     It is worse than that—some of their children are dead.  This morning as I was getting ready to get up (I used to jump up, shower, shave, dress very quickly; now I have to get ready to get up) the bedside radio told me that MR, a prominent movie director, had died.  I had met him only once, and he was blissful in his baby buggy.  He was being pushed along a Berkeley street by a matched set of tall, lean people who had been, the year before, classmates of mine at the University of Wisconsin.  So they had married, produced a son, and the father was, like me, a graduate student in philosophy at Berkeley. But, unlike me, he had strong doctrinal views.  Strange, complex and, I thought, a bit crazy—a Chicago version of the then-popular semantic movement.  He was always putting up a heated defence of some elaborate scheme, convincing no one but remaining unshaken. Until one day he announced that he was giving up “this philosophical nonsense” and moving over to experimental psychology.   Which he did, working with a greatly respected professor.  He got his PhD, behaved himself, and within the usual half dozen years got to be an associate professor.  That meant that he had tenure, and he promptly told his department that he would never do another bit of “this meaningless research,” that he did not care to be put up for promotion but would, as a matter of duty, continue to teach his undergraduate courses.

     I would bump into one or both of them from time to time. They were avid fans of the theatre (having some tenuous connection with Genesee Depot, the base of the great team of Lunt and Fontanne) and he—apart from his teaching chores—devoted his life to golf. The son grew up, became a Hollywood director, made some very good films, and his parents would mention him with pride. The last time I saw him he blurted out his golfing feat: “I shot my age!”  He must have been about 80.  Soon I read his obituary.  His wife followed in very short order.  And now his son.  Apart from the fact that he despised philosophy, and probably psychology as well, I haven’t the faintest idea of what went on in his mind.  He took no part in university politics.  A tenured golfer.  If there is a moral to that story, it escapes me…

     I do scan the obit section, more idly than eagerly, and note the thinning of my cohort. I am sometimes surprised that the newly-dead had not died some time ago. This is even the case with some of my University colleagues. A faculty club lunch regular will drop down to coming in once or twice a week. Someone will call his home to see if he is sick. We will get some explanation and he will reappear, but less frequently.  Then, we will notice, he hasn’t been around for some time. Another call home, another explanation, more absence and then, after several months, “Is he still alive?” I note that I, myself, have dropped down to only once or twice a week.

     I am surprised to find myself writing about death.  I had not intended to.  It is not really a pre-occupation.  Although a short time ago as I scanned the obit page I had a vivid, certain, but calm feeling that my name would be there within a few years.  I was sure of it.  I remember a line from Larkin’s great Aubade, “Most things may never happen: this one will”.  Yes, it will.  All men are mortal and all that, but I had never had such a clear present sense of it applying to me.

      My mother lived to 92 and I did not enjoy, nor did she, the last two years.  I keep trying to erase them from the memory of her dashing combative poetic life. Why should a whole life be marred by a feeble, forgetful, futile termination?  Why remember that!

     How to be remembered?  An editor asked me for a picture of my mother to go with her last book.  My mother had not been without vanity and had left a great many pictures of herself—from her blooming twenties to her nineties.  What should I select?  A question that lured me into waters deeper than I had expected. The Philosophy Department has the custom of hanging a picture of a member, donated upon retirement, in the philosophy library.  I had assumed that meant a picture taken upon retirement.  But one day, idly scanning the array, I was shocked to see the picture of a beautiful young woman, radiant and enchanting.  Professor H, I realised, but not as the nerve-haunted, wrinkled, shrunken character I knew in her 60’s but as the heartbreaking beauty who had once flashed brilliantly across the western philosophic skies. “Unbelievable vanity,” was my first thought.  How could she!  And then I came to my senses.  She was to present a picture of herself to the department, a picture that presented herself, her real self, her essential self, as she really was.  Why pick a late distortion of that vision?  Which picture?  The Beauty or the Crone?  Was she all her life a beauty disguised increasingly by wrinkles?  Or always the Crone concealed for a time by the illusion of lovely flesh?  Or did she, do we, “change”?  As Kant said, “Only the permanent changes.”  But is there one picture that captures the permanent, essential, deeper you? Or that at least comes closer than the others?  Look at a bunch of pictures of yourself and I’m sure you will see what I mean.  Some will seem to be more like you than others.  Suppose a photographer follows you around for a day and presents you with fifty snapshots from which one is to be selected as your portrait. Would you or your lover or friend or enemy agree?   You would have to fight to subdue vanity, as others would have to deal with other biases, but the skill of the portrait artist lies just there—to find that one image that expresses the deep self or character of the person.

     I considered sending the editor my mother’s favorite picture of herself—an early fortyish pensive profile resembling Alla Nazimova, a striking actress my mother did not mind resembling.  I did send it, but along with one even more youthful—unformed and assertive—and a late Sibylline pose.  The editor cut the Gordian Knot by using all three.

     As for myself, I resisted the temptation to present a picture of myself as a WWII army Major brooding over an intelligence report in China and presented something from a newspaper, aged by a struggle about educational reform in my mid-fifties. It resembles me even now.

     But the problem still intrigues me. Consider a collection of photos not all taken in a single day, but fifty photos taken on fifty birthdays.  Can you select or discern the one that best expresses your fundamental enduring self?  I recognize myself whenever I see myself in our photo albums.  Young, old, smiling, frowning, I greet the same Me.  I am the same me in all the moods and stages. An artist, a student of character, should be able to discern the appearance that comes closest to capturing the real self.

     I am a bit surprised to find myself saying these things. Very unprofessional, very fuzzy.  I usually believe in clarity, and I’ve lived through the tangled jungle of arguments. The ghost in the machine! The real behind the appearance! The enduring and the changing! I have no more patience for arguments.  But still…picture, picture on the wall, which is the (truest? deepest?) of them all?

      I seem to be aware of the Me behind my appearances.  And so to Yeats’ great line: “Sick with desire and fastened to a dying animal.”  I am haunted by that line.  It is always popping into my mind.  I know! I know! I should understand that I am that dying animal, not something else fastened to it.  But the fact is I feel, with Yeats, fastened to it.  My borrowed knees are crumbling, my borrowed arms ache.  And why did Yeats not see that the desires that tormented him were only part of the animal to which he was fastened?  Or did he think “I desire, therefore I am?”  What is left when you shed the dying animal?

     And I always drift from Yeats to Milton, to Belial’s great response to Moloch who thought that death was a reasonable alternative to living in Hell.

        To be no more; sad cure; for who would lose,
        Though full of pain, this intellectual being,
        Those thoughts that wander through Eternity,
        To perish rather, swallow’d up and lost
        In the wide womb of uncreated night,
        Devoid of sense and motion…          

        I can unfasten myself from my crumbling knees; from “desire;” even (did Sophocles say it?) from a harsh and furious master; but not from “this intellectual being.”  “Cogito…”  Or that too?  I had never thought of it as “as long as I think, I am.”  But, as a matter of fact, both are true: I think I am a dying animal, and I feel fastened to a dying animal.

**********************

      Along with the concrete realization that I will die fairly soon (will I totter past 90? And do I really want to?) it dawns on me, startlingly, that I will not see how things come out.  I follow politics avidly, but now I realize that I will not see what happens to Hillary or educational reform or Israel or the next Presidential race—certainly not the one after that—or the composition of the Supreme Court. I will never know how things come out! A strangely sharp, if belated, realization.

     How will this affect me? I recently watched the running of the Kentucky Derby on television. The preparation, the speculation, the interviews, the slow movement of the horses to the starting post, and finally, the race itself. Suppose I knew that I would not be able to see or ever find out about the finish?  Would I still have watched?  I now feel that I am watching many “games” and will not know the final scores. This is sinking in rather slowly.

     A wise Greek—was it Plato?—described the three types at the Olympic Games.  First, the contestants; second, the partisan spectators, the fans; third, neither contestant nor fan, those who observed the whole spectacle, displaying, I suppose, Olympian detachment.  

      Now as a member of that third group, I notice that I skip more stories as I read the newspaper.  There are fewer conflicts that I bother to follow; they will run their course without the aid of my attention. I still follow the Bush presidency and hope for the re-emergence of a vigorous Democratic party dedicated to the defence of public expenditure, rediscovering the virtues of “tax and spend.”  I consider the fight over tax cuts versus public expenditure as the significant political struggle of the day, and I follow that more as a partisan than a mere spectator.  In foreign affairs I still follow the news of Israel and, for related reasons, the Irish struggle. But on such things as the energy problem I don’t bother—I have my own energy problem.

***********************

     “The human race is just rotten,” said a coffee house friend reacting to the menu of horrible news of murder and mayhem in the morning paper.  The news was horrible, as it often is, but I do not let it really depress me or move me to condemn mankind as an ugly species.  I fall back, for consolation, on a pair of my old aphorisms.  “Always remember,” I tell myself, “that we have at least created the ideals we betray.”  We have created the very idea of “justice,” the betrayal of which, by us, moves us to condemnation and despair. We have projected the idea of “brotherhood” beyond the small family to embrace all mankind—a daring move certain to yield tribute to the urgencies of “friend” and “enemy,” “we” and “they”.  Born and raised in the jungle, we have dreamed of and tried to build, to create, the great City on the Hill.

     We stand on tiptoe to reach the height we yearn for, and we learn, over and over, that we cannot really live on tiptoe for long or for too long or forever.  We, our heels, always come back to earth. 

     The other aphorism, giving us a possibly passing grade:  for angels, terrible! For animals, not bad!

Written in 2001 at the age of (almost) 87

The Religion of the Marketplace

The “Marketplace” is the central image of a new religion, rising out of the ruins of a century marked by devastating war and by a remarkable run of insane rulers and intrusive bureaucracies that have destroyed faith in politics as capable of producing a just and happy human order. It is a situation ripe for the emergence of a non-political—an anti-political—salvation creed and, lo! it has emerged.  It is the Religion of the Marketplace—universal in its appeal, easily intelligible, militant, triumphant, sending its missionaries, disguised as IMF agents, to keep the faint-hearted from straying from the new Tao.

     The Faith has three basic dogmas: the primacy of desire, the creative and saving energy of competition, and the tolerant inclusiveness of “non-judgmentalism.”

The Primacy of Desire

     We are creatures of desire, driven by wants, needs, urges, cravings, passions. Living is a process of slaking thirsts, of appeasing hungers—whether simply for food, drink, shelter, warmth, affection, love or, beyond that, for status, power, glory, or curiosity.  Whatever the craving, we seek satisfaction.  What is the pursuit of happiness if not the quest to satisfy desires?

     A creature of desire is, in intention, a craver of consummation, a consumer. When we are sick and seek health we are no longer patients but consumers of medical services.  Students, no longer understood as apprentices, are consumers of educational services.  Soon we will hear that a child is a consumer of parental services, a spouse a consumer of connubial services, a parent a consumer of generative joy. Customers all! (And all poised to sue.)

     Traditional religions are ambivalent about the pursuit of desire-satisfaction as a way of life.  Freedom from the tyranny of desire, from being a slave to desire; or the disciplining of desires for the sake of some significant commitment; or the curbing of some desires as the conquest of our lower or selfish natures—traditional views such as these provide a faintly sinful aura to the life of desire, distinguishing it, as “vanity,” from the genuine pursuit of happiness.

     But Marketism daringly asserts the legitimate centrality of desire and moves it into a more respectable neighborhood.  The undeniable significance of desire draws its strength from the great desires that express our fundamental needs, our literal hungers and thirsts.  Less “basic” desires enjoy a borrowed strength.  What we desire, what we want, reflects some sort of need, and its satisfaction is some sort of pleasure or “good.”  In an egalitarian mood we echo Bentham’s remark that “pushpin is as good as poetry”.  Scratching any sort of itch is satisfying, contributing to our happiness.  But beyond being hospitable to the wide range of desires, we elevate the status of desire.  The movement from “I want” to “I value” is easy enough. From what “I value” to what “I approve of” and consider “good” is hardly a step at all.  The “good” is simply the object of desire, and satisfying desires gets ennobled into the promotion of the good.  It is nice to hear that as we satisfy our desires we are really serving “the good!”  Arguments about what is “good” become arguments about desires or tastes; and, as every schoolgirl knows, “De gustibus…”

      Marketism, far from celebrating “freedom” as liberation from the thralldom of desire, even approves of the cultivation of desire, of stimulating demand, of increasing consumption, of transforming faintly felt desires into urgency, of creating longings, of spurring us into habitual shopping even without felt needs.  And “credit,” highly democratized, makes it unnecessary to suffer the pangs of postponed gratification.

     So, in an act of theological daring, the pursuit of the satisfaction of desires is freed from sinfulness; is given a legitimacy that traditional religions are reluctant to bestow; is transformed into a positive virtue.  Marketism forms an interesting alliance with Individualism, holding not only that the individual is the center of significance—a cluster of desires that each is free to satisfy—but also that if each pursues his own interests the well-being of society is served—and much better than it would be by anyone so misguided as to try, ignoring his own interests, to serve the interests of society directly. (Something about an “invisible hand”…)  In short, Marketism interprets the hallowed pursuit of happiness as the search for the satisfaction of desires. Man is essentially a consumer, a craver of consummation, and all his energies and arts are properly instrumental to that end.

Competition—Profit, Glory, and Fear

     If there is to be consuming there must, of course, be producing.  This is to be directed not by a simple desire to satisfy need, but by an intention to profit by satisfying that need.  No longer must we depend on the utopian hope of the altruistic impulses of others to satisfy our needs and desires—perhaps this works for lovers or in some families, but not in the “real world.”  Instead, we invest our faith in the operation of the self-interested profit motive.

      The power of the profit motive is aided, when it flags—and even when it doesn’t—by the great spur of competition. Sloth is the problem—it seems to have jumped ahead of Gluttony in the hierarchy of the Deadly Sins—and competition is the cure.

     Competition, as we know, has two facets.  It offers fame, glory, the laurel wreath, status, the glow of victory, a greater market-share. And, on its darker side: defeat, failure, the fear of nothingness. Fear is the spur that touches even those who do not wish to enter the great race, for one must fear losing even what little one has. Just as the fear of punishment ekes out the simple respect for the law, so the fear of failure is a vital aspect of the power of competition.  There is hardly a problem for which competition is not offered as a remedy.  In the troubled field of education, for example, we are offered not concrete educational or pedagogic proposals but the promise that if we open the public schools to competition they will be forced to improve.

     Thus, combining the desire-centered life of consuming with the energizing power of competition creates the basic elements for the marketplace way of life.

Tolerance or Non-judgmentalism

     There is another essential feature of the religion of the marketplace, almost as important as desire (consumer demand) and competition.  It is less a driving force than a theoretical defense against impeding criticism.  It may be called “pluralism” or “tolerance.”  Its enemy is “judgmentalism” and if it were not so cumbersome I would unashamedly use the term “anti-judgmentalism”.

     Consumers make choices, with or without the aid of seduction. The choice may be said to express consumer judgment about goods.  Taken together, consumer choices are described as the judgment of the market.  And the judgment of the market, when we see it summed up in an annual report on consumer spending, may startle and even shock us.  So much on X!  So little on Y!  We may be tempted to think that we really ought to spend much more on Y and less on X. That is, one might presume to pass judgment on the judgment of the market! And this, in the end, poses a deadly threat to marketism or, as it prepares to ward off a blow, the “Free Market.”

     For lurking at the edge of the marketplace are some enemies. There are a variety of moralists eager to advance their favorite “prohibitions” against the buying and selling of things of which they disapprove, regardless of the desires of willing buyers and eager sellers.  And more troublesome than mere moralists are political institutions which are more than willing to pass laws and regulations which override market judgment of goods by political assessments of virtue.

     The problem for marketism is how to protect the judgment of the market, representing cumulative desires, against the critical claims of “Reason.”  On the one hand, there is what we seem to want, what we actually choose, and on the other hand there is the judgment we make when we think about it. How can we protect “doing what we want” against the arrogant criticism and even correction by “Reason”?

     Coming to the aid of marketism is a renewal of the ancient attack upon Reason.  The attack has two familiar wings—first, the denial of the possibility of “objectivity” and second, the declaration of the irrelevance of “reason” to what we call “value judgments.” While these two “philosophical” positions were not designed, conspiratorially, to support Marketism, they have nevertheless served the cause. Let me try to state the case.

     About “objectivity”—I will skip directly to an over-simplification.  We are all locked into our own points of view and are unable to escape or transcend them. We never see things as they really are (the thing in itself) but as they seem from our particular slant or bias.  There are many opinions and, on the extreme view, none can claim to be “true” except as an attempt at oppressive hegemony.  Our “reason” is essentially the cunning of bias. It is not a fair, impartial judge, and the appeal to it as “objective” is merely fraudulent and self-serving.  All is subjective, and reason is only “my reason” and really no better than any other.  We are all lawyers writing our briefs in a world without judges. There is no “right” view—since God is presumably dead—but only our partial biases.

     And second, Reason, in any case, is unable to judge or evaluate “values.”  There is a long tradition about this.  Pascal thought that the heart has its reasons that reason knows nothing about; Hume thought that reason is and ought to be the slave of the passions, not their judge. Some Pragmatists thought that reason was “instrumental,” concerned with “means,” not with the judging of “ends” or goods.  Others have thought that judging something as “good” only amounted to saying you liked it.  The brash American version is “The customer is always right!”  All such positions have the effect of freeing desires from the controlling judgment of “reason,” of freeing values, and even the values we call “moral convictions”—ultimately, like all values, matters of taste—from domination by rational judgment or from the effects of thinking about them, or from criticism of popular taste by intellectuals or elitists.  “That’s what you think!” is the ultimate retort.  And who are you to think that what you like or think is better than what anyone else thinks!  Better to be tolerant, to recognize the plurality of “goods,” faintly echoing another religious tradition: “Do not judge!” or “be not Judgmental!”

        Thus Marketism seeks to free itself from the domination of Reason, leaving the judgment of the market triumphantly holding the field.  It expresses what people really want—offering happiness deeper than might appeal to a thin, bloodless, Reason.  Banished forever is intrusive political meddling, in the name of that Reason, into the deeper affairs of the heart.  The dying faith in the Responsible Mind gives way to the faith in the Invisible Hand.

     These, then, are the elements of the religion that seems destined to take over the world.  The consummation of desire is the bottom line.  Competition is the exhilarating motivational spur. A tolerating non-judgmentalism its democratic, peacekeeping, universalizing spirit.  The irresistible simplicity of this trinity is buttressed by a powerful and subtle intellectual structure—a “model” with principles like “comparative advantage,” elaborated beautifully at centers of theology like Chicago, Wharton, and other Business Schools. Mathematics is the sacred tongue, a universal language accessible to the educated on all continents, in all modern cultures.  With the collapse of the powers that claimed Marx as their prophet, there is really no serious competitor. Who can challenge the claim that the world is now a “global Marketplace” whose imperatives relentlessly displace faintly persisting older creeds and ignore archaic political landmarks?

     Seeing the triumphant progress of the religion of the marketplace one can sympathize with the old Roman contemplating the rise of Christianity—incredulous but resigned.  But Marketism is easier to believe in; it requires no strenuous exercise of credulity. And its heroic novelty—transforming the unabashed pursuit of desire from vice to virtue—is, from a moral point of view, a pleasant bonus.  We can finally proclaim, whatever was once said about “the eye of a needle,” that “rich is beautiful!”

     There is little hope that anything can really avert the triumph of the religion of the marketplace. Trying to stop it by “refuting” its doctrines is an exercise in futility. Its appeal, especially its anti-political animus, is, I think, impervious to rear-guard theological squabbles.  In concrete struggles between the political and the economic, between the forum and the market—as in the attempt to protect political borders against the market-driven mobility of labor or goods—the few victories of politics seem only stop-gap and temporary. How can we resist the vision of the world as one great shopping mall with each of us a happy shopper, regardless of color, creed, or sexual preference, with universally accepted credit cards? Nevertheless, I will point out some soft spots that may, in the long run, mar or trouble the triumph of Marketism.

Problems With Desire

        It seems we must always relearn the hard way that the pursuit of happiness is not to be confused with the pursuit of pleasure, with the quest for what we desire, the search or struggle to get what we want—with all those consummations for which we may devoutly and misguidedly wish. The Market offers to give us what we want.  And yet…

     It is obvious that we don’t always know what we need.  But more bafflingly, we don’t usually know what we want.  We are mistaken in what we think we want. We regret having chosen what we thought we wanted.  We are disappointed when we get it. This is one of the oldest stories in the world, and I bring it up only to remind us that in building on “desire” we do not avoid the perils of uncertainty, error, regret, disillusionment—and the emptiness of fulfilled desire. “I know what I want” is a pervasive cognitive error.

     That “what I want”—assuming I know what I want—is “good” for me is another familiar error.  You cannot treat the desire for nourishing food and drink and an addictive desire for cigarettes on the same level. From the point of view of what is good for you there is a hierarchical array of desires, some good for you, some disastrous, that the life of satisfying desires must come to terms with, about which we learn—if we live—to become judgmental.

     However “inclusive” we may wish to be in asserting a democratic equality of desires, we are forced, in spite of our reluctance, to invent or discover or fall back on some boring moral categories. We discover, unless we are complete idiots, that some of the things we may desire are—dare I say it—not good for us, but bad.  In the midst of a glittering market array of temptations, unspoiled by admonitions like “Is this really necessary?” or “less is more!”—in a world of skilled blandishment, of cold professional seductiveness—still we learn to say “No!” to some desires, we learn to judge.

     This means that there is a built-in conflict or tension in the marketplace way of life. We need to develop the art and discipline of choosing, to become skilled defensive shoppers.  But on the other hand, the marketplace develops the arts of enticement and seduction (advertising becomes a major industry) and claims the right to fan the flames of desire.  Protected by the claim of “freedom of speech,” the seduction industry—strangely legitimized and largely freed from moral disapprobation—has become a major social power, threatening to dominate the Forum as it has come to dominate the marketplace.  The pitchman now overpowers the teacher as, long ago, the Sophists were able to still the voice of Socrates.  Who can rejoice in the victory of folly over wisdom?

      I hardly pause to note that some great religions and many secular sages, seemingly unaware of the facts of life, of the glory of the Mall, still refuse to find happiness on the treadmill or the merry-go-round of desire. But that message does not echo in the Marketplace.

Problems with Competition

     In a gaming or sporting culture it is hardly necessary to explain “competition.”  We know how it evokes greater effort, discipline, excellence.  It is a world of winners and losers.  The winners are better, always reaching new heights, setting new records, achieving what, without the drive of competition, would never be attained.  The “contest,” as an old professor of mine would have said, is, for our culture, a “root metaphor.”

     The problem, however, is that life is not a contest; that competition is overshadowed in significance by its little brother, cooperation; that for some of the best things in life, competition is a destructive intruder; and that even in the world of contest, losers are often—in important respects—better than, healthier than, nicer than, and even happier than winners.

     In the world of the corporation, girding itself for competition, the “team” is the active unit and the internal life of the team is forcing the discovery or rediscovery of an ancient moral insight. Trust, truth, responsibility, interdependence, dedication, subordination to a common cause, the principles of teamwork—all these are destroyed by internal competition.  The arts of cooperation to achieve a common goal dominate, even as a team prepares itself to compete, and the spirit of cooperation is so infectious that “merging” offers itself as an alternative to competing.  Joining forces in cooperation is so appealing that we have invented anti-trust laws—a political intervention—to keep competition alive among those who, left to their own tastes, would prefer non-competitive peace, collusion, and even monopoly. Wherever there are teams, the cooperative arts subvert the competitive, even in the strongholds of the marketplace.

     However energizing it might be, competition is really out of place in the world of things that we value for their own sakes. Lovers are not in competition with each other.  The fellowship of scholars, the pursuit of truth, is maimed and corrupted by the intrusion of competition.  I remember the shock of reading how a pair of scientists shamelessly hid some photos from Linus Pauling lest he figure out what was going on and beat them to the Nobel Prize—that great corrupter of fellowship.  Every teacher knows that competition for grades destroys genuine learning.  In the greatest of things, the challenge is to master an art, not to defeat or outdo others.

Problems with Non-Judgmentalism

      As for “non-judgmentalism”—this is perhaps the deepest of the theological roots of the new order.  Essentially it rejects the great normative categories that define a civilization. It transforms the older notion of Original Sin into the conviction that the human world is a “sick” one, needing not a scolding minister but a compassionate therapist to restore the self-esteem destroyed by the sense of guilt that haunts the neighborhood of commandments and moral rules that, of course, we inevitably violate. Treatment is needed, not moral disapproval and punishment.

     It’s true, of course, that rules, exhortation, and punishment will hardly cure our sicknesses.  Scolding may be out of place when we are engaged in treating and curing. “Ministering” is wonderfully ambiguous in this respect–“There is a time for judging and a time not to judge.”  But popular “non-judgmentalism” goes beyond this, moving from the view that judgment is sometimes out of place to the conviction that it is always intrusive and inappropriate.  Who are you to judge that my taste is bad?

     At its starkest, then, market non-judgmentalism rejects the appropriateness of the great normative categories—true-false, good-evil, right-wrong.  “I want, I like, I believe” is, for each of us, for each culture, each sub-culture, for the least among us, the ultimate assertion.  “Relativism” and “skepticism” name traditional philosophic positions summoned to rescue the dignity of our private, individual feelings and points of view from demeaning subordination to external normative standards.

     The current image that expresses this mood is that the mind is a great marketplace of ideas.  The best test of truth, we are told, is the ability of an idea to prevail, to seize its share in the marketplace.  If it sells it is, to that extent, good or true or, for that matter, beautiful.  Thus, significant judgment is simply the judgment of the market. But it is hard to think of a more inept metaphor for the mind than the “marketplace of ideas.” In truth, the mind is to a marketplace as a keeper is to an insane asylum.

     It is really pointless to say that the whole thing is silly. Desires are real and important although we know that a life spent in the service of desire is a disappointing perversion of the pursuit of happiness. Competition earns its garlands, but as a world-turning force it is not in a class with Love. Non-judgmentalism has its legitimate moments but the Good, the True and the Beautiful are not simply matters of taste about which there is no disputing.  The common sense of social animals can be momentarily swept aside by ideological zeal or silenced by confusing sophistical loquacity but it will, sooner or later, reassert itself and display sanity. But in the meantime we will have to live through a period of triumphant, militant Marketism. We will live in the glow of the Global Economy that reduces parochial cultures to obsolescence, that will transcend the barriers of political boundaries. We will all become, if we adjust to the new creed, canny consumers, prudent investors, imaginative sellers or marketers, temporary employees constantly honing new skills and polishing our CVs as we seek new temporary jobs, living shallow, restless, miserable lives, reaching for fruit that, as Satan discovers in Paradise Lost, turns to ashes in one’s mouth.

Democracy and the Marketplace

     Marketism, even as it claims and shapes the future, will still need a supporting and even supplementary structure that is political.  The enemy, of course, is the “welfare state” but the approved political conceptions are comforting and even stirring—The Rule of Law, and Democracy. These are emblazoned on banners the marketplace emissaries carry into backward areas, preparing them to receive investments, and they are oddly appropriate for the Global Marketplace.

     The Rule of Law suggests a situation governed by certain rules, laws, or principles whose violation can be appealed to essentially non-political, independent judicial tribunals.  I say rules, laws or principles in order to make the point that more is involved than mere local positive law—the enactments of local “sovereigns.”  We tap into the great and ambiguous tradition of Natural Law—a tradition always concerned to check the arbitrary will of a local “tyrant” by an appeal to Reason or the Higher Law, or even Divine Positive Law.  It attempts to limit the willfulness of rulers, especially as rulers attempt to violate the sanctity of private property—the most firmly asserted of “natural rights”, more deeply felt than the relatively modern conception of Human Rights.  Thus, the Rule of Law stands for something above mere politics and to insist on it is to attempt to put some things, especially property rights (and the sanctity of Contract), beyond political peril, to reduce the threat of political power. The “rule of Law” is an attempt to limit the scope of “politics,” of government.

     There is, of course, a scandal, almost a secret scandal, about the “non-political” character of the “rule of Law.”  Even as we proclaim it, our most sophisticated legal practitioners—the priesthood of the “Rule of Law”—deride the view that judicial judgment can be “non-political,” that there is, even when there is an apparent legal text, a “correct” reading of the Law, that Judges do not project their own “values” into their presumably non-political decisions.  Such arguments about “judicial activism” undermine the view that the “Rule of Law” is above politics and that international judicial tribunals to which we may submit disputes will not simply impose their political views upon us in the feeble guise of “objective” legal rulings.

     So we may discover that some international court, dedicated to the defense of the principle of “free trade,” will overrule archaic American political attempts to protect dolphins, or the environment, or the wretched of the earth, from the imperatives of the marketplace.  There will, I think, be some backlash against the inevitable challenge to our sovereignty, but mere national political dominance is an ordained victim of the triumph of the world marketplace, of the new religion—one of whose tenets is “The Rule of Law.”

     As for “democracy,” or at least regimes characterized by “elections” with some degree of freedom or fairness, it is the recent boast of our Marketists that this form of government has spread through all of Latin America even as market economies have displaced varieties of “command” economies.  Who can minimize this achievement?  Political democracy and the free market!

     And yet!  The conception of Democracy that flourishes in the shadow of the marketplace is a far cry from that which brought unique dignity to the mere subjects of a non-democratic polity.  Democracy is a kind of two-job theory of life.  In addition to a career as a professional or some sort of craftsman or the performer of a primary function, even as an entrepreneur, each member of a democracy has, ex officio, a political role, a role as participant in the ruling function.  At the very least, each is a member of the electorate, the ultimate tribunal.  The exercise, by each of us, of that function is what gives significance to democratic life.

     The tragedy of much of modern democratic life, however, lies in the subtle corruption of that role.  Largely under the powerful influence of a marketplace culture, the citizen-voter is seen as a consumer demanding his or her share of the goodies, not as a ruler exercising disinterested judgment about the common good.  We have turned our elected representatives, in ways that would have horrified Burke or Mill, into our Designated Shoppers. Political discourse has been degraded into advertising.  Money, a foolish Supreme Court has said, talks. The political forum comes, sadly, to resemble the marketplace in which we are to act as customers demanding and getting what we want.  Why do we complain that citizens have come to despise “politics?”  It is a mark of sanity.  They know, or at least feel, that it is a corruption of what it should be.  They despise its practitioners (although they may like their own shoppers) for corrupting “taking thought together” into “bargaining”—the corrupt paradigm of the mind in action. We have turned electoral and even legislative life into an ugly scramble for partisan advantage—a perversion that turns the normal stomach.  To a depressing degree, the “democracy” we proclaim is only a bazaar version of politics.

     So our Marketists promote a “democracy” transmuted into its marketplace version with citizens seen as customers who, when politically active, are merely on another shopping trip. It is, among forms of polity, least hostile to the identification of the “public good” or the “public interest” with consumer demand.  It is the form of politics most compatible with the marketplace creed.  Given the primacy of desires, competition, and non-judgmentalism, “democracy” is most inclined to take “privatization” in its stride as simply a bit more efficient and less hindered by sentimentality. It is the form of politics least likely to threaten the domination of the marketplace.

     Promoting the Rule of Law and Democracy defangs the political threat to the global economy while, of course, allowing the political institutions to provide the necessary infrastructure of economic life: the broadly conceived law and order necessary to promote investor confidence; education (or rather, “investment” in the value of employees); bankruptcy and bail-out provisions so that those who take risks should not be made to suffer too much and possibly lose confidence in the system. And even, perhaps, to do some things the market, intent on its own way of promoting happiness, may overlook—things like fresh air or clean water or forests or nice animals or safe food or even health (whatever that is).  Nicely-tamed politics may be permitted to do some useful things so long as its institutions do not get too big and do not presume to interfere with the global economy or the world market, with their elevating and liberating conception that at last, for the common man and woman, life can be a perpetual shopping trip.

     My fear is not that the devotees of Marketism will fail in their venture but that they will succeed.

     As we contemplate the future of life in a global marketplace we may forget that real life begins when we leave the store, after “shopping;” that nothing that makes life significant takes place in the marketplace; and that everything of value—the search for justice, beauty, health, wisdom, love—is coarsened and degraded when embraced and brought within the sway of the marketplace; when the measure of all things, robbed of their glory,  becomes the “bottom line.”

“Religion and the Marketplace” originally appeared in New Oxford Review, September 1999.

A Venture in Educational Reform

In 1965 Tussman launched an experimental program on the UC Berkeley campus modeled after Meiklejohn’s Experimental College at Wisconsin in the 1930s.  It was a two-year program offered to a group of 150 entering freshmen, meant to replace the normal curriculum for the first two undergraduate years.  The experiment lasted for four years.  This is a retrospective account of that endeavor, written in 1988.

Home of the Experimental Collegiate Program, 40 years later (2008)

I pass the house every day.  It stands at the edge of the campus, looking very much as it looked a quarter of a century ago. It had once been a fraternity house, but when I first got involved with it, in the early sixties, it had been standing vacant, almost derelict, not yet assigned to its university use. It became the home of the Experimental Program and the center of my life for four years—a slowly fading scar for a lot longer. It now, more sedately, houses a graduate program.  The last time I stepped inside, almost a decade ago, I noted the familiar mellow wooden panels in the great hall, now without a defacing collection of student poems protesting my behavior, and I saw, still in use, the enormous round wooden table that had taken up most of the room in my office—the table I had found in the university warehouse, around which the fabled Teggart had once conducted his seminar.

     There was a time when the sight of the house in the early morning produced a surge of anxiety, a deep reluctance to approach the door, to open it and step into whatever it held for the day. And for some years after I had ceased to enter, the mere sight of the building as I drove past evoked a vague sense of apprehension that dissipated slowly as I moved through the paces of a normal academic day—as a disturbing dream lingers and fades through an uneventful morning.  But now when I pass the house nothing happens.  It may be possible at last, “all passion spent,” to recollect in tranquility.

     The question most difficult for me to deal with is “why did the Program fail?” I usually rush to explain that as an educational venture it did not fail; that it “failed” only in not establishing itself as a permanent part of the university.  But there is something unsatisfactory about that answer.  Why, if it was educationally valid or even significant, did it disappear without a trace?

     I have never really told the story of the Program.  In the middle of its third year I wrote Experiment at Berkeley, giving an account of the rationale and of some of the problems we were facing.  It was essentially a progress report.  I have never completed the report nor written anything about the problems of “educational reform” or about the state of college education generally, or joined very seriously in our local educational controversies.  When the Experimental Program had run its trial four-year life I returned to writing and to departmental teaching for the dozen years until retirement.  I did not turn away from the Program as a sad experience best buried in oblivion, but, obviously, I put off writing about it.  Experiment at Berkeley does give a good idea of what it was all about, so some of the things I would want to say have already been said—although without the benefit or disadvantage of several decades of reflection. There are things to add and things that deserve emphasis and amplification, but apparently not urgently enough to have overcome my reluctance to plunge back into the depressing world of educational controversy to reargue tattered issues.

     Since this is a reflection on educational reform let me say that I am not concerned with “normal” educational improvement. Reasonably good teachers improve with experience, although they may grow out of the stage of energetic novice enthusiasm whose glow may be mistaken for the aura of Socratic genius. The teaching of good teachers tends to grow better; the teaching of poor teachers tends not to improve, or not to improve enough to make up for the belatedly discovered mistake in hiring. Improving the educational system by improving teaching is obviously a good thing.  But American higher education is not in danger of being destroyed by bad teaching, nor, if it needs salvation, is it going to be saved by an outburst of great teaching or by the improvement of its normal teaching.  The state of the art of teaching is not, for the college or university, a life-threatening problem.

     And, of course, in a good or, as we are in the habit of saying at Berkeley, a “great” university, the level of conventional teaching is bound to be rather high.  There are always complaints, some even legitimate.  Classes too large or too hard to get into, confusing advice, preoccupied or unsympathetic faculty.  But the place is undeniably full of vigorous minds engaged in research and in teaching, full of bright students doing what bright students are supposed to do. Most consider themselves lucky to be where they are, not awaiting reform.

     Why then, at such a place, in the early sixties, did I think that a drastically different educational model should be tried? And not tried merely as one does an experiment to prove a point or a theory, but as an effort to bring about a significant change in our educational way of life—to work out a conception of an alternative pattern; to show that it worked, in practice, much better than the conventional pattern—much better, since if it was only as good as or a slight improvement over what we had, it would not be worth the trouble—and then to keep it alive as a regular and even growing part of the university and a model for adoption by colleges and universities everywhere.  That was the idea, the dream, the project.

     The Program was not a response to particular events or pressures.  Students had not yet discovered the delights of hurling themselves upon the cushioned cogs of the machine demanding institutional change, and, in any case, student educational demands, as they came to be made, were utterly at odds with the spirit of the Program.  There was deep irony in the fact that the student movement on the educational front fought under the banners of the system it thought it hated.  That is, it demanded “decontrol,” the abolition of “requirements,” consumer sovereignty, an elective system ad absurdum—the marketplace, in short.  Whereas, alas, I considered the model of  “the marketplace” applied to education or to the mind as bizarrely oxymoronic.  But I am getting ahead of myself…

     The Program, I repeat, was not a response to pressure. No one was demanding it or anything like it.  So far as I am aware, I did not need it. I had recently returned to Berkeley after a half dozen or so years in the East.  I was delighted to be back. I was a professor in the philosophy department.  I had tenure. I had written a book.  My classes were going well.  I had a backlog of writing projects.  I was not even on a committee to foster educational innovation.  Why, then, an unsolicited venture in educational reform?

     I suppose that a purely analytic treatment of the educational issues posed and faced by the Experimental Program could avoid that question.  The account of genesis is more historical and biographical than analytical; the order of creation and development is not the same as the order of justification.  But this is a sort of Apologia and an Apologia, if we can judge by its great models, is a complex mixture of the two orders.  At any rate, I will say something about the genesis of the Program, not so much out of autobiographical concerns as for its relevance to the problem of introducing significant change into the educational system.  Significant or at least drastic change—if that is, in the end, what one wants.

     There is a kind of ameliorative change that is rather easy to achieve. A professor can usually fiddle with his course as he pleases.  He can change its substance and its methods as he thinks best without anyone’s permission.  With little trouble he can substitute new courses for old ones and keep his teaching in line with his interests and educational convictions.  This sort of change is generally so easy that there is seldom an accumulation of frustration calling for drastic measures.  From the faculty point of view, being able to teach what one wishes, as one thinks best, without external interference, is, short of teaching less and in the extreme case not at all, to enjoy the good life.  To change, modify, improve the courses one teaches does not require one to be an educational reformer.

     There is also a familiar class of educational changes beyond these that, generally, do not interfere very much with the established way of life. Should a requirement be added or dropped for all or for a special group of students?  More math or writing or a foreign language or American or Western or World history? Should all students be required to achieve computer-literacy or ethnic-consciousness? Should a new or an inter-disciplinary “major” be established or the requirements for a particular major changed?  Should grading be tougher, more revealing than tactful, or forced on a curve, and should students grade their teachers?  Should we divide the year into quarters or semesters? Should we have small courses or seminars for freshmen?… Questions of this sort have popped up on the academic agenda for as long as I can remember, staple items in the politics of education, normally requiring collective faculty and Administrative action.  Faculty members differ in their degree of concern with such matters taken as general educational questions. They will, however, be alert to proposals that affect their own teaching, resistant to those that might require them to handle their own courses differently, and supportive of the claims of colleagues to teaching autonomy.

     It is obvious, of course, that all this is about courses, about their inner life and their external ordering.  The course is the familiar, the inevitable unit of our educational life. To teach is to give a course; to study is to take a course or a mildly ordered collection of courses; to administer is to arrange that the takers and the givers are properly brought together. The fate of the Experimental Program can be foreshadowed in a simple statement: In a world of course-givers and course-takers it tried to abolish the course.

     I need here to account for two things:  the shaping of the alternate conception of lower-division liberal education, and the motivation to try to bring it into existence.

     I forget who it was who first said “nothing is ever said for the first time”—broadened for this occasion into “or thought.”  Discovering what you believe is discovering the tradition into which you fall.  My deepest educational conviction is certainly not original.  It is that what we call “liberal education” is essentially the education of the Ruler.  It is not primarily aesthetic—for the heightening of “enjoyment,” the enriching of leisure.  It is not the education of the human being as human being. It is not education for scholarship or research or the professoriate.  It is not primarily spectatorial. It is vocational, and the vocation is governing or ruling—in a broad sense, politics. It is the forbidden fruit so deeply associated with our aspirations for participation in the ruling function. To say this is, of course, to raise all sorts of spectres and to summon hostile spirits from the vasty deep—worthy opponents, decent, well-motivated, cultured, humane, skeptical, tolerant, anti-authoritarian opponents—all honorable, although tending to archophobia and to regarding this “merely” political emphasis as a disparagement of the mind that is to be valued for its own sake.  Nevertheless, it was, for me, the conception at the very heart of the Experimental Program, without which I would not have tried to launch it and without which, therefore, it would not have come into existence at all.  It was, although I am resigned to the probability that this will seem at least paradoxical, the educational vision of a rabid democrat.

     I consider myself indebted for this view to my teacher, Alexander Meiklejohn, and I have been dominated by it for as long as I can remember, and long before the Program took shape.  I need to acknowledge, although I do so reluctantly, a fundamental disposition expressing itself in a drift into political and legal philosophy, manifesting an intellectual provincialism giving its special character to the curricular core of the Program. “Reluctantly,” because I would like to think of the Program as based on something more than a temperamental devotion to the “political” as against other claims.  At any rate, I start with the wonderfully baffling idea that liberal education is education for the ruling function, and the companion conviction that since everyone in a democracy is to share in the ruling function, everyone needs to share in the education reserved, in elitist societies, for the ruling class.

     To this must be added the perception that the college was not providing it and, rather fortuitously, that there was a vacuum where it should or might be.  The freshman and sophomore years, the lower division, are generally the wasteland of American higher education.  That is, in part, because we still tend to protect those years against the vocationalism or professionalism dominating the graduate schools and even the upper-division “majors,” without having a very clear idea of what we are protecting them for.  The lower-division student is not yet under the aegis of a particular department, has not made his fateful choice and is considered to be engaged in remedial or preparatory or exploratory or even “general” education.  From the point of view of the dominant power structures—graduate- and research-oriented departments—the lower division is someone else’s responsibility, a holding area in which some grazing is done before, fleshed out a bit, the creature can be put to serious work. If anyone is responsible for the lower-division student it is probably a powerless dean trying to marshal some educational energy not elsewhere engaged (and therefore a bit suspect), for a venture professional scholars seldom find professionally interesting or important—except, perhaps, as an exercise in recruiting.  But this is an old and hackneyed tale and I will not linger over it. The Program idea was to take the conception of liberal education as broadly politically vocational and insert it into the spiritually empty lower-division years—thus filling a deep but unfelt need while at the same time giving a significant point to an otherwise pointless phase of American college education.

     The curricular embodiment of this conception needs to be spelled out, but it might appear to be something that could be done in the usual way by stringing courses together and, if necessary, creating some special courses—by creating, in effect, a lower-division variant of an upper-division disciplinary or inter-disciplinary “major.”  Why, then, did the Program abandon the course structure and propose instead a single massive highly organized two-year program?  Since this was the distinctive feature of the program most responsible for its unique quality and for its special problems, I suppose I should explain. But I may do so slowly and in bits and pieces.

     Imagine, if you can, that you are a freshman newly enrolled in the Program.  You will be introduced to the idea that, unlike what you have been used to, you will, during the life of the Program, be doing—be thinking about—only one thing at a time. And you are told that for the next two weeks or so you are to spend all of your time, all of your time (well, almost all, since you will be allowed to take one course, a language course, for example, in addition to the Program. “Comic relief,” I heard it called) reading or “studying” Homer’s Iliad.  You are advised not to bother reading about Homer or The Iliad, not to consult secondary or scholarly or “critical” material–just to read The Iliad itself.  You will not be aware of the blood that had been shed in support of those instructions, of the academic proprieties being trampled on, but you might consider it a strange beginning to a college career. You had expected more formidable assignments but, accepting unexpected gifts, you decide to go along—not without the shadow of a worry that you may not be going to get a real college education after all. (“All the guys in my Dorm are already taking quizzes in three subjects and I’m still just fooling around with The Iliad!”)

     What we are trying to do, and probably without much initial success, is to lead the student into the experience of relaxed, enjoyable immersion, a sustained involvement of mind, in a great work whose significance is far from obvious and about whose significance nothing, at this stage of the game, should be said. Many people will go through college—through a lifetime—without such an experience. Two whole weeks out of your life in which your job is to soak yourself in The Iliad or something like it.  But this is not to be an exercise in solitary reading.  Everyone in the Program will be doing the same thing, including the faculty. And during those two weeks there will be some scheduled Program activity.  Informal lectures or panel discussions, seminar meetings, a short paper to be discussed in a private tutorial session.  And all, during that two-week period, on The Iliad. Intensive, undistracted, essentially enjoyment-directed. Clearly, all our resources are marshalled to encourage the having of a particular kind of intellectual and emotional experience.  It is very difficult to describe, but we have gone to a great deal of trouble to try to make it possible.  We have cleared the decks, provided the time, gotten rid of a distracting multiplicity of intellectual tasks, tried to discourage the desire for information of a scholarly, historical, literary, sociological sort, to restrain the tendency to find out what others have said, to do “research.”  But what, you may well ask, is there left to do?  And why do that?  This is, I am afraid, one of those familiar situations in which it is futile to try to explain to someone why he should do something until after he has done it. Of course, freeing up time, telling students to relax and wallow in a book and try to enjoy it, doesn’t do very much.  If you have been taught to read rapidly you will have forgotten how to read slowly; you will simply read rapidly and wonder what to do with all the spare time on your hands.  And no one will enjoy something because a teacher tells him to.  And what, by the way—if it is difficult to explain what the student was supposed to do—was the faculty supposed to do?

     I am almost afraid to confess that the faculty was not supposed to do what it was supposed to be good at, what it had been chosen by the university to spend its life doing.  We were not to practice the “disciplines” with which we, as faculty members, were identified.  There were five of us.  I was a member of the philosophy department, usually teaching courses in political or legal philosophy.  I had recruited as colleagues in this venture: a political theorist with a great reputation as a charismatic teacher; a talented poet, a bit Byronic, who later created some havoc as the academic vice-president of a private University and died gloriously hang-gliding; a radical youngish civil-liberties lawyer, a bulldog in argument, reputedly a strong “socratic” teacher; a mathematician-engineer, a well-known maverick, politician, golfer, and a man of broad culture.  None of us had grappled professionally with The Iliad.  What or how were we to “teach”?

     But I should explain first (I see I may have problems with Shandyesque tendencies) how this odd crew came to be gathered around the large table contemplating such a strange problem. I need to make a rather delicate decision.  Or rather, explain it, since I have obviously made it. I am going to talk about faculty colleagues.  I need to, or I can’t explain why the odds are so stacked against the success of such a program. There is really very little written about the college teacher at work.  I used to read every academic novel I could get my hands on, and it is surprising how little is revealed about teaching.  Compared to the surgeon working in the glare of the operating room the professor’s teaching is normally a private affair, largely shielded from peer scrutiny.  Of the several dozen colleagues from a variety of departments with whom I have gossiped at lunch for several decades, labored on committees and manned the academic barricades with, I cannot think of one whose class I have ever visited or who has visited one of mine.  We assume, I suppose, that we can infer from a person’s ordinary behavior how he would behave as a teacher. Apart from ordinary risks in inference I have discovered belatedly that, adding to predictive risks, there are actually teachers who think of teaching as a performing art, and who, when they enter a classroom, become strangely transformed. Ordinarily, it may not matter. As long as each is enclosed in his own watertight compartment the great ship of learning can stay afloat even if some compartments collapse or shelter weird side-shows.  But if you put to sea in a single ark…

     Between the conception and the fruition lay a great many obstacles. The first step I took, before I discussed my plan with anyone else, was to drop in on President Clark Kerr.  As university president, he was not directly in charge of the Berkeley campus, but I thought his support would be useful.  I told him I wanted to try to establish a variant of Meiklejohn’s Experimental College and wanted to know whether, if I got through the campus obstacles, there would be difficulties at a higher level. I remember his pleased smile. “Ah,” he said, “the revolution from below!”  Of course he was all for it.  He believed in education and, as president, there was little he could do; education was in the hands of the faculty.  I told him I’d be back if I got far enough to need his help, and we parted cheerfully.

          There were a number of decisive points at which, if I did nothing, if I sat still, peace, like a frightened kitten, would return, but if I did something, took the contemplated step, wrote the letter, asked to appear before the committee, I would have to face the next problem, more deeply and inextricably involved. And eventually, retreat would no longer be an option. The visit to Kerr was a first tentative step; I had indulged an impulse but had not yet assumed a commitment.  I cannot explain the movement from having an idea about how things might be, to actually trying to do something about it without evoking the compelling powers of discipleship, of hero-worship, of sheer stubbornness and pride.  I was driven by the desire to vindicate the educational vision of Alexander Meiklejohn.  Of course, I got no encouragement or support in this venture from Meiklejohn himself, although he was still living in Berkeley when I made the opening moves.  I now think that I was obtuse not to realize that he must have had deep reservations about my project and that, had I asked him, he might have advised against it. (What, I wonder, would I now say if an old student told me…?) But it never occurred to me to ask, and he died before the Program came into being. I mention all this to acknowledge that, as is so often the case, behind the public proposal lurks a private passion.

     I would need to do three things to bring the program into existence.  First, I would need to draw up an educational proposal that could win the approval of the appropriate faculty authorities. There had to be something on paper a committee or a Faculty could consider and judge academically legitimate or respectable or desirable.  This would pose some problems since I was asking approval not for the usual single course in a traditional departmental subject, but for a non-departmental offering worth the equivalent in academic credit of about sixteen out of the twenty semester courses normally taken in the first two years.  Beyond the enormity of that, I was unable and unwilling to do more than offer a brief sketch of the plan.  I did not propose to spend time working up a detailed syllabus to offer to a committee for its approval, not only because I found the task uncongenial, or because I shuddered at the thought of opening myself to the scrutiny of what I felt, correctly, to be the hostile academic mind of which, in educational matters, I had a rather deep distrust, but because, for reasons I now turn to, it was impossible. What was needed was a formulation clear enough to give a fair idea of the plan and vague enough to allow for a wide range of discretion as we went along.

     I needed—this was the second of my three tasks—to find or recruit colleagues.  In the intuitive groping for form I had settled on about 125 to 150 students as a good number.  And since I did not want an experiment that, if successful, could be dismissed as “too expensive”, I thought we needed a faculty of five or six—something like a ridiculous 25-1 ratio.  So I began to look for four or five colleagues.  Who?

     It was clear, to begin with, that it would be foolish to consider anyone without tenure.  We needed regular tenured Berkeley faculty members whose respectability would do something to fill the gap left by the sketchiness of the proposal.  On the other hand, we needed teachers who were bold or reckless enough to step out into a wilderness unmarked by reassuring disciplinary signs.  Respectable adventurous teachers are not people to whom you can hand a syllabus someone else has worked out.  Normally they are the masters of their own courses. If they enter into cooperative ventures at all, they do so as “colleagues.”

     My problem, then, was to find colleagues; and that turned out to be extremely difficult.  I could not simply approach someone as if with a tabula rasa, asking “Should we make up an educational program?”  I wanted people who liked the general idea and were willing to work out the details together as we went along.  But acceptance of the general idea, as I described it, was a primary necessity; there were some curricular and pedagogic givens—or I would not have bothered with the whole business. And this created a situation about which I was quite uneasy.  I was clearly the prime mover and conception guardian, but I was trying to find full colleagues—I was quite romantic about collegiality—who would be happy to implement the plan.  It was to be our program even though it was really my idea. As you can see, I was a bit naive.

     Most of the people I approached were not interested or available.  They were fully occupied with their work, had all sorts of plans and commitments, wouldn’t think of taking two years off to go slumming outside their own fields, were vaguely puzzled that I would, but no, thanks. I hasten to say that I do not criticize them. They were fully occupied with research and teaching and university service, they were doing good, even distinguished, work and there was no reason for them to stop in order to do something they didn’t believe in doing or didn’t think they could do well. Nor do I really object to the fact that the Berkeley faculty is what it is—a group of hard-working, self-directed, high-powered, research- and graduate-school-oriented professors—clearly reflecting its primary function.  But there is a lower division, and I thought we could afford, there, a daring attempt at a different form of significance—staffed by its regular faculty, not by a group of lower-division teachers not up to regular or peculiarly Berkeley standards.

     I had great difficulty recruiting, and might well have had to give up.  But in the end I found two who could come in, but only for a single year, and two who could come in for two years—enough for the launching.  We decided to settle for a group of graduate student teaching-assistants instead of trying to find a sixth faculty member.  The process was even more complicated than it sounds.  It was a sort of juggling act.  I couldn’t really push for program approval until I could point to a faculty.  I couldn’t tie down a faculty member, get him to change his plans and arrange leave from his regular duties in his department, unless the program was given an academic green light, and that was far from a sure thing—in fact it was downright unlikely. And finally, nothing could be done without a budget—salaries, space, staff support, and all that.  And it was hard to arrange that without having done the other things first—which could not be done first without budgetary assurances. And I was in a hurry; I did not intend to get bogged down in “planning;” it was “next year or not at all.”

     In the end, budgetary and other matters depending on the Administration turned out to present no problems at all.  The Administration was invariably and ungrudgingly helpful, granting every request (of course I made only reasonable requests) and easing every difficulty.  Academic approval was, I think, the greatest hurdle.  I won’t trace the complicated process, but one scene persists in fond memory.

     It was a meeting of the College of Letters and Science at which I was to present the plan for approval.  I had distributed a couple of pages explaining the Program curriculum.  Sketchy, of course, and not very deeply analyzed.  I elaborated a bit and took questions.  Then up rose a stalwart old-timer, an old-world social democrat whom I greatly admired for his stentorian defense of freedom and virtue in past academic battles.  “I see,” he boomed, “that you start with the Greeks.  Very good.  Then you jump to seventeenth century England.  Also very good.  Then you go to early America and then to present day America. Good! Good! So it is a historical program, is it not?”

     It wasn’t, but I hadn’t figured out quite how to describe it.  I began something like “not exactly” or “well, sort of, but…” But he would not be denied.  “A historical program!  But look at the gaps.  Full of gaps.  For a historical program too many gaps!” He smiled at me, pitying, benign.  “Too many Gaps. I will vote against it!”

          Someone came to my aid.  “Isn’t it really just a study of periods of crisis, of revolution—that’s it, a study of revolution?”

          “I suppose so,” I mumbled gratefully—a mumble I would pay for later.

          Then rose a very bright young Professor, greatly admired for having introduced the phrase “academic oatmeal” into faculty Senate deliberations, to ask whether I was open to suggestions.  “If I started considering all your good suggestions,” I said tactfully, “I’d end up where we are now.  So I guess it’s take it or leave it.”

          Naturally, after that brilliant defense, it was approved. I’ll skip further harrowing details; in a relatively short time we were all ready to go.  But before I get back to The Iliad I want to take up several things that combined, as it turned out, to make my life miserable, taking all the joy out of the first two years.

          There was the House.  It was obvious that some sort of physical center was needed.  If students were to interact in a common program there had to be someplace for them to meet.  Space was scarce and we considered, as a last resort, taking over a student dorm.  Apart from whether that would have been possible, we were not sure it was a good idea to add the problems of residential separateness to those of distinctive curricular eccentricity. In the end we considered ourselves lucky to be able to capture an abandoned fraternity house on the edge of the campus, and it was patched up and sparsely furnished for our use. Rooms were fixed up as offices for the faculty, a few as seminar rooms.  There was a Program office, a large reading room, a great hall.  Not lavish, but adequate.  I suppose it was because I began with such high hopes that I came to detest the very sight of it.  I had dreamed that it would be the lively center of our life, a place you could drop in to at any time and find students and faculty working and talking… Well, I am not, a quarter of a century later, going to allow myself to feel again the disgust at the ugly culture that came to dominate and to mock the university conception of civilization.  What should have been part of an adult university became a juvenile counter-culture hangout. I felt responsible for the existence of the House and felt guilty at my betrayal of my university colleagues who had trusted me to conduct, in their name, an experiment in liberal education.  For the first two years, the sight of the House made me sick.

     Contributing to the discord was a decision we had taken, about which the faculty had had its first disagreement. When it appeared that we could not find a sixth faculty member we decided, as I said, to take on five graduate student teaching assistants.  There was to be a great deal of writing and we thought we would need help in reading and discussing student papers.  There was never any thought—we explicitly rejected the thought—that the TAs would assume full or general faculty roles. We were seeking assistants, not colleagues. We invited applications.  There was a complication.  Usually a TA is a graduate student working for a PhD degree in a department, assisting in an area, a discipline, in which he is working, getting experience in his own field.  That was not possible in our “non-disciplinary” program and we worried about diverting a graduate student from his primary work in his home department, but we concluded that if the TA was kept narrowly to reading papers and discussing them with the students, the experience would be a good one and not too distracting.  In the process of selection it became clear that a graduate student very active in the student movement had virtually managed to wring an utterly unauthorized promise from our poet.  Trying to forestall this, I had, in turn, gotten the assurance that no commitment would be made.  I did not want him because I had seen enough of him to conclude that he was a pretentious militant who would not accept an “assistant” role and would dedicate himself to bringing the revolution to the program.  I thought he would be uncontrollable and destructive and that we would have enough problems without this one. So I explained why I was against taking him on.  His faculty “sponsor” admitted the danger, but said he had indeed promised and that he would undertake to “control” him. I knew he would be unable to do that and I was adamant.  What to do?

     Since this was a rather fateful turning point, I must explain that we had no formal structure of authority.  We had no chairman, no director, no head, no CEO. I happened to still be chairman of the philosophy department, but that had absolutely nothing to do with the Program.  I never had a Program title but had drifted, not unnaturally, into being the one who had to sign things.  Whatever I may have thought, I religiously refused to let the words “my Program” cross my lips or even emerge from between clenched teeth.  I was, as I have said, Romantic about collegial equality. But a decision needed to be made and, oddly enough, we had no way of doing so. I felt strongly enough about this matter to brood, over a week-end, about simply asserting a veto power, but I didn’t think the program would survive such an act and, against my better and bitter judgment, agreed to abide by a majority vote. I lost, of course, 2-3.  I never forgave the triumphant three—two of whom knew they were only to stay in the Program for one year but still had no qualms about violating the obviously appropriate principle of consensus.  Needless to say, my worst fears were quickly realized, and in a few short months the entire faculty agreed that we would have to work without TAs, although the damage had been done and the first Program was in something of a sullen, alienated shambles.

     But while the problem of authority manifested itself first over an “administrative” question it underlay the Program more fundamentally and, because of its intrinsic nature, in ways not generally present in the college at large.  The Program attempted to establish an intellectual community, a “college,” and it conceived of such a community not as a collection of persons living in the same place, or rooting for the same team, or, as Clark Kerr once said, united by a common grievance over parking, but as a group of persons studying the same thing.  We had a required curriculum that lasted for two years and we were all to go through it together—reading, writing, thinking about the same works at the same time.  So to begin with, there had to be some curricular-determining authority.  Obviously, the “faculty.” The Program did not share in the increasingly popular view that a student’s human right to participate in the decisions that affected his life extended to his voting on the reading list or deciding whether, indeed, he would write an assigned paper.  But usually, where the Course is the unit of educational life, the individual professor is in authority, determines course content and method, and works out a modus vivendi with his students. In the Program, no single professor was in authority; none could do as he pleased about those things normally subject to his pleasure, none was free to exercise his discretion, let us say, in modifying or changing assignments.  Faculty and students alike subjected the Program to centrifugal forces that could all too easily have destroyed its unity, its character, its very excuse for existing.  If, for example, we had decided to raise the problem of obedience to law by reading Antigone, it was not up to one of us to decide to read Billy Budd instead, or even in addition.  We might entertain an argument that Billy Budd was better than Antigone and that we should all read it instead, but we wanted all students to be studying the same thing.  If faculty members are free to choose their own variations they will do so in preference to arguing about the best common decision, avoiding the most fruitful kind of educational discussion—apart from destroying what is common in a supposedly common enterprise.

     Or, if a student, living at his own unique rhythm, wanted extra time to complete a paper due, for good arbitrary reasons, on Friday, he needed to be told to get it in on time, that we did not want a better paper later, that we wanted the best he could do by Friday, that there was no such thing as a “late” paper, that he was, after Friday, to be starting on the next assignment, not to be alone and palely loitering with the old…

     Or again, a student will announce, after The Iliad (which the student may have been reluctant to read in the first place) that he now wants to devote his life to the study of The Epic, and would like to be excused from Thucydides and all that in order to work on Beowulf and Burnt Njal and Gilgamesh and Aeneid and Morte d’Arthur, etc.—and is stunned when he is told that if he wants to stay in the Program he will do the Program work and that if he wants to write his own ticket he can leave.

     I need hardly point out that all these—and other—tendencies to fly apart, to take our separate amiable ways to salvation, come clothed in attractive educational or metaphysical garb.  The enemy is not the power of brute anti-intellectual inertia; it is the romantic, individualistic, consumer-oriented view of reality with which we have perforce become well acquainted.  Under some circumstances it carries the day—“nothing,” I used to say, “is as irresistible as an error whose time has come”—and it was sweeping the American campus even as we tried to establish a small island of sanity.  But to protect the Program required the systematic and constant assertion of authority.  Its common character had to be protected against the tendency to fly apart.  Someone had to say “No.”

     Oddly enough, after my defeat in our one and only vote, I was, as if by general consent and without comment, left in charge.  I made a few unsuccessful attempts to develop a genuinely cooperative way of life.  An attempt to establish a faculty dinner meeting once a week was abandoned after a single farcical meeting.  The assembly meetings, because, I think, of faculty reluctance to perform without shining, fell apart. A small student faction took over the House and drove most students away.  Some faculty members became “cult” or coterie figures, subtly shielding students from my tyranny.  And I sank more deeply into the dictatorial or authoritarian role. Anguish at a distance is, I find, essentially Comic, and I am now faintly amused at what once tormented and enraged me. I remember lying awake nights reviewing the twisting path from Clark Kerr’s office to yesterday’s ordeal at the House, resolving that I would not, after everything I had gone through, abandon the program to the irresponsible views or impulses of those who would turn the Program into a caricature of the elective system that had reduced American college education to the mediocre joke against which the Program was to stand as a fruitful alternative.  If the Program had been my idea, the mess was my fault; I would fight it through, and, after the first two-year run—I could see no way of salvaging it—try again with a different faculty.  And in the meantime I, who was still a card carrying member of the American Civil Liberties Union, a veteran of the loyalty oath fight, a Meiklejohnian extremist in defense of the First Amendment and, for that matter, a deep rebel against the practices of the educational establishment, slipped without a murmur into the role of wielder and defender of authority.

     It was, of course, necessary.  For example, when even those who had voted for my TA bane had had enough of being undermined and agreed unanimously (the last straw for the sponsoring Poet was overhearing the advice given to a student who had emerged from a tutorial session.  The Poet had requested an exercise in the rewriting of a short paper.  “Don’t do it,” we heard the TA urging, “don’t do what he says. Just do what you want…”) that the TAs should all be allowed to finish the year but not be reappointed, I said that they should be informed in time to apply for appointments in their own departments.  All agreed.  As the deadline approached, worried that if they were not notified they would have a legitimate complaint, a claim to reappointment, I kept reminding the faculty.  But weeks passed and they were not notified by their strangely reluctant supervisors.  Finally, I told the secretary to hand each TA a letter, by me, informing them that they would not be re-employed.  Naturally, at the next assembly I was handed a petition by students requesting that the TAs be reappointed, and naturally I said “no.”  (I admit it may have been tactless of me, standing at the podium, to do what I usually do when presented with a bit of student writing—reach for a pencil and start correcting…)  I did not bother to embarrass anyone by stressing that it was a unanimous faculty decision, and no one came to my support.  There was an uproar, but I did not budge. Nor explain.  What was there to explain? That this was a counter-revolutionary putsch?  Or that in a Program that did not allow students to determine the curriculum students had no role in choosing their teachers?  I had to go East for a conference on education—ironically, to explain the Program—and when I returned I found that all the furniture in the great hall had been piled into a pyramid that reached the ceiling. The deserted house echoed to my steps.  I did nothing and the pyramid gradually eroded.  We staggered through the year.

     For the second year I was able to get some faculty replacements and we managed a sort of weary truce.  Quite a few students dropped out and into the regular university across the street, many unwilling to put up with the turmoil.  Most of the most active of the rebellious students stayed on, naturally, manifesting their own deep loyalty to the Program.  I should say, lest this general complaint mislead you, that on the whole the students were intelligent, energetic, and imaginative, with a strong sense of integrity.  Also, in spite of everything, full of charm and promise.  I really remember them with pleasure.  I owed it to them to have chosen a faculty less beset by vanity and insecurity.

     And I owe it to the TAs, also, to acknowledge that, on the whole, they were well-motivated and helpful.  After the first year we worked without them, and the decision to do so was a wise one.  But not because of ideological or personality clashes or because they were poorly chosen but because of something deeper.  Bright graduate students are at the stage of their careers at which they are most technically involved in their disciplines.  If they teach, that is what they should be teaching.  They should not be thrown into a non-disciplinary arena where they cannot use what they are in the process of mastering.  It is no reflection on their intelligence or teaching talent to say that they are not, at that stage of their careers, ideally suited for ventures in liberal education, however attracted to it they may be.

     This has been a longer excursion than I had expected into the institutional background of the Program. It is time to return to a consideration of what would justify all the trouble.  What were we to do with, to make of, The Iliad?  We had all read it during the previous summer (or rather, “reread” it, since I learned that you do not ask a Professor if he has read one of the obvious classics, you ask if he has reread it recently).  Obviously, we did not expect our Engineer to focus on the fortifications of Troy and the defenses of the beached fleet, the Poet to focus on the Homeric art or do an Auden on the Shield of Achilles, the Political Scientist to lecture on government on the Plains of Troy, the Lawyer to enlighten us about Agamemnon v. Achilles in re Briseis, the Philosopher to pontificate about Zeus, fate, and freedom.  But what? We were rather nervous.  I remember the Lawyer complaining privately to me with a hopeless shrug, “What’s there to teach?  There are no arguments to analyze!” He cheered up when I suggested that the Thersites episode could be seen as a free-speech class-struggle case.  Well, it’s really a wonderful book and you might want to reread it if you haven’t done so lately.  But teach it?

     Years later I was a guest at a gathering of St. John’s faculty—a highly accomplished group of teachers—and strolling to lunch I listened to a senior member fondly extolling his own old teacher.  “My life changed forever when he walked into a class on Homer and asked his first question!”  I broke in eagerly, afraid he might not explain, to ask what the question was.  “Oh,” he said, surprised, as if the answer was obvious, “What was Achilles like?” I remember my surge of pleasure at his reply.  Of course!  Nothing about the profound significance of the Homeric world view and all that. What was Achilles like, and Hector, and Helen, and Agamemnon…and off we go.

     By instinct or by some happy accident we decided that at the first assembly—“lecture”—each of the five faculty members would simply read out the passage in The Iliad that appealed to him most, with perhaps a brief remark.  It should make an interesting, revealing, provocative opening exercise, encouraging students in a similar venture, sharpening the intensity of their reading.  I still, after twenty years, remember it vividly.  The Engineer, who was also an elected city official, a practicing politician, focussed on the futile attempt of some Greek leaders to lure Achilles, who was, as we know, sulking in his tent, back into the struggle, and marvelled that here was a man with a grievance who, unlike most political leaders he knew, simply couldn’t be bought, wouldn’t compromise, had no price… The Poet movingly deployed the scene in which the aged King Priam was reduced to pleading with his son’s killer for the return of his son’s body. The Lawyer, of course, read the Thersites bit with great passion, his voice ringing with indignation over the fact that Odysseus would simply strike the only man who dared to question the value of the war at a public meeting while fellow soldiers laughed and applauded Odysseus, ridiculing their own spokesman, as blood trickled down Thersites’ back and a tear ran down his cheek.  I read the long passage in which Hector, awaiting the furious approach of Achilles and almost certain death, toyed with the possibility of avoiding battle, yearning for the bygone days of peace, resigning himself to his doom in a soliloquy unmatched, I think, except for Satan’s on Niphates in Paradise Lost.

     Well, The Iliad is full of great things and we did our best reading our Rorschach bits.  Except for the Political Scientist. He stood up, opened his book to the first page, read the opening sentence, turned to the back of the book, read the last sentence, opened it somewhere near the middle and read a random sentence. Then he said, “It’s an organism.  Wherever you cut it, it bleeds.”  And resumed his seat.  Point, set, match. A sharp collective intake of breath from the assembled students. Hail the victor!  While I thought, “The S.O.B.  He’s staked out his position.  The Program Rebel.  Not for him to do what we had agreed to do. Even in a Program itself in rebellion he is more rebellious still.  Pompous drivel, but appealing. Impressive. He is going to play the Pied Piper and steal the children…”

     I am quite aware that by even mentioning this episode I invite you to think me over-sensitive, jealous, unbalanced, disturbed by what I should have merely smiled at.  I did, in fact, merely smile.  I did not say what I thought.  But what I thought was utterly correct, prophetic.  I saw a subtle breaking of faculty discipline, an “individualistic” act, an invitation to the battle of vanities.  The Political Scientist had clearly gotten off to a good start and was never headed. The radical lawyer, thenceforth, played the even more Radical Lawyer, the romantic poet the even more Romantic Poet.  Only the mathematician was unaffected, full of down-to-earth common sense and aware that there was no place for him in the dance of prima donnas.  As for me, my role was unmistakable. I was the symbol of authority, of the establishment, the doomed defender of the flawed system (Aha! Hector!).  I must have found the role congenial, since I sank into it easily and seem to have been playing it ever since.

     I will let this odd episode stand for the many ways, subtle and not so subtle, in which the conflict between the tendency to fly apart into autonomous journeys and the insistence on a common path found expression.  Obviously, the easier course is to abandon resistance to entropy or the death-wish and allow everyone—faculty, and perhaps even students—to go their own ways, pursuing their own interests.  But the whole point of the Program was its commitment to a special kind of common intellectual life that by its very commonality nourished a deeper individuality. There was no reason for our existence if we were going to recreate the free market that generally prevailed in the University—autonomy for professors, elective options for students.  It took a constant and active assertion of authority to counter the tendency to degenerate into chaos.

     In a jointly taught program, the unity of the faculty on certain questions is crucial.  On certain questions.  I want to make it clear that vigorous faculty disagreement, open, prolonged, heated, is essential to the vigor and success of the enterprise.  I used to say that we must agree on “constitutional” questions in order to disagree on “legislative” ones.  Perhaps I should say that we must agree on procedural matters in order to be able to disagree fruitfully about substantive questions.  To agree to read a book is the necessary prelude to significant disagreement about it.

     But the distinction between procedural and substantive is not always clear and I will refer to a controversy that seems to have made an indelible impression on those who witnessed it.  We had agreed to read Hobbes’s Leviathan.  The reason for doing so—although there are many reasons—is that Hobbes makes the fundamental case for respect for political authority as the alternative to a life that is, as everyone has heard, “solitary, poor, nasty, brutish and short.”  You can quarrel with this formulation, but it is close enough to remind you of what the general question is and, I need hardly remind you, of its special appropriateness for the world in the mid-sixties and of the opportunity it offers to bring the discussion of urgent questions from the street to the classroom.  Whether you agree with him or not, Hobbes is formidable and worth, educationally, grappling with.

     The Political Scientist was to do the introduction.  Imagine my feelings as I heard him say that Hobbes was very powerful, an overpowering writer, but that his doctrine was pernicious.  If you followed the first step, you would be trapped by the argument (not true, by the way).  So you should not pay attention to what Hobbes says, but only notice his rhetorical artistry, read it as a literary critic would.  But pay no attention to his argument; don’t try to grapple with that.  Instruction, in short, about how not to read a book we were supposed to read.

     I, of course, rose to make an unplanned rejoinder—to the effect that the reason we were to read Hobbes was so that we would have to deal with an argument—a desperate message sent across three centuries from the midst of a terrible civil war—not to enjoy, unmoved, a literary and rhetorical gift.  I may have lost my temper and revealed a small fraction of my contempt for the mind of an educator capable of making such a statement.  I refrain, even now, from saying what I think.  Well, the moment passed, but from that moment working in harness was impossible and we could, at best, barely tolerate each other’s presence.

     There are some lessons to be drawn from this episode.  I begin with the reminder of the very oddity of the possibility of its occurrence.  In the ordinary course of events he would be teaching his own course in the political science department and he might or might not, at his discretion, include Leviathan and deal with it as he thought best.  I would be teaching a course in the philosophy department, might choose to use Leviathan, and would deal with it as I thought best.  We would neither know about nor be in a position to interfere with each other’s conception of what to teach and how to teach it.  It could even be argued that with each free to follow his own professional judgment the best teaching would result.  Certainly, there would be less conflict.

     But in a program, as distinct from a single-teacher course, certain problems force themselves onto the agenda.  I can no longer continue to use, or use in a special way, the books that I, for some reason or other, find congenial or fruitful or simply reassuringly familiar.  I must propose something to colleagues who have their own preferred lists.  We must make a case for what we do; we will argue, sometimes bitterly, since the stakes are, in spite of superficial appearances, quite high.  And we will have to decide, if we are to continue together and not settle for each going his separate way, about a significant range of educational problems that may seldom, in other circumstances, come in for serious consideration at all.

     Once the “private” course is abandoned, the teacher finds himself in a transformed and problematic world, without familiar landmarks and accustomed usages, naked to his colleagues, forced to justify his conception of the teaching art and even to change his practice. The common program is a cauldron for the brewing of educational insight, and I use this image fully aware of its evocation of Medea—some promise of rejuvenation, some destructive dissection, lots of heat. The difficulty is this:  on the one hand, involvement in a common program is the great device for forcing attention to the essential and neglected problems of education. On the other hand, collegial life in such a program can be so searing and demanding that one must doubt whether, except for a short time and by a happy accident, such a program can be institutionalized at all—short of heroic efforts and unusual commitment to a mode of educational life whose very point an outside observer is hardly likely to discern.

     But I must turn to other matters—although there is more to be said about the virtues and difficulties peculiar to “programs” as against “courses.”  Curriculum!  In my missionary years I used to point out that the Program had two distinguishable aspects: its non-course pedagogic structure, and its completely required curriculum.  I would make the point that the structure had its own virtues and could be adapted to a variety of situations and was not tied peculiarly to our curriculum.  I thought, especially, that it would be easy and useful to try it for an upper-division departmental major, unplagued by our special personnel and non-disciplinary features.  And, of course, I had to grant that even for purposes of lower-division liberal education the world did not have to begin with The Iliad and proceed through Greeks, Jews, and Englishmen to Henry Adams and Malcolm X as we were doing.  The form did not entail any particular content.  It did mandate a common required curriculum, but it did not require this particular one.

     Nevertheless, I did and do have a special attachment to this particular one, although my defense of it has tended to be a bit diffident.  In my eagerness to convince others of the virtues of the Program I might stress the structural pedagogic features and might even push my view that a liberal education required a broadly “political” curriculum—the cultivation of the sovereign mind.  And although I would offer our curriculum as an example, I shied away from defending it as anything more than a contingent option.  I was doing, in short, what we tend to do in academic life—avoiding argument about curriculum.

     I tend to think (mistakenly) that “required” applied to “curriculum” is redundant, but in the world of the contemporary American college it is merely anomalous.  If we must have requirements—from time to time we are shocked into saying we must have some—we try to have as few as possible.  To the assertion that all students need this the rejoinder may be that they also need that and the other.  You can’t require too much, so you must decide which.  Everyone should certainly have a basic course in American history!  Yes, and for that matter, in world history too.  After all!  It’s terrible how we are turning out monoglot English-language chauvinists.  Everyone should be required to master a second language! And how about math? It’s the language of science, and look at the Japanese!  And our scientific illiteracy! And our computer illiteracy! And our literary illiteracy!  And our ethnic ignorance! And our sexism! And can we really give anyone our degree without teaching him some economics?  And isn’t there something about philosophy, or ethics or values…ah, yes, almost forgot that…

     Obviously, we can’t have everything and we can’t easily agree about what to require, so we end up about where we are.  We agree, shaking our heads sadly, that High Schools should have prepared our students better, but now that they are here, apart from a bit of remedial work, we offer freedom and pluralism. That is, we offer our students “freedom” to choose, and justify our own irresponsible reluctance to impose requirements as “pluralism.”  A feeling of weariness steals over me as I face the prospect of arguing about that most dubious of freedoms, student elective freedom, about treating the student as a customer or a consumer who, presented with a rich catalogue containing a myriad of courses, is supposed to know what he wants or know what he needs. “Elective” should be a vacation, not a way of life.

     Suppose we consider the tension, the interplay, between what a student chooses to do and what he is required to do in the course of his undergraduate college education—a more pervasive problem than is suggested by “electives” and “requirements.”  We may begin with the recognition that his very presence is an ambiguous mixture of freedom and necessity—of wanting to be there and having to be there if he wants to have a certain kind of life.  It is important to recognize that the normal American student is in college because it is the normal place to be at that time of life, not because he is driven by a thirst for the higher learning, by a desire to be a professor, to spend his life within earshot of the bells of the Ivory Tower.  His presence is “voluntary”, but in a Pickwickian sense.  Let us say, then, that the student presents himself, enrolls in, chooses to enroll in, the College of Letters and Science, still undecided, as he is permitted to be, about his “major” and subsequent career. What shall he study?  Or rather, what courses should he take? What can he take? What must he take? What does he want to take?

     The burden of choice is mercifully relieved by the existence of some requirements.  If he has not already satisfied our minimal demands in language or mathematics he is encouraged to attend to such matters promptly, and to take the required course in reading and composition at once.  But beyond this, dim visions of the future begin to make their demands. The decision about the major (even about career and life) looms. Students will have to choose by the third year, and will discover that there are “prerequisites.”  That is, before they can be admitted to a particular major they will be expected to have taken some lower division courses in preparation. By this device, some departments have come, with dubious legitimacy, to preempt almost half of a student’s lower-division pre-major course life.  And the danger of not knowing what you are going to major in is that when you do decide you may be delayed or prevented by lack of foresight about prerequisites.  So, in addition to general requirements there are prerequisites to worry about—courses you must take first if you want to take something else.  And it is often the case that if there is something you want to do there is also something you are required to do along with, in addition to, what you want to do. If you decide to major in Philosophy because you are interested in moral problems you will find that you have to struggle with Logic, which you may not be interested in at all. In fact, every major, in addition to the goodies that attract you, is likely to involve you in doing things you don’t want to do, or at least think you don’t want to do.  (It is surprising how often we find that what we think we want to do turns out not to interest us after all, while what we think we are not interested in turns out to be very interesting.)  And not only the major, or the career for that matter, but any particular course will be a mixture of the chosen and the given, the wanted and the required. You choose a course because it involves X and find yourself, willy-nilly, also involved with Y.

     All this is to induce some confusion about the chosen and the given, the elected and the required, in the realm of education.  I do this in the hope of making what I want to say more palatable to readers for whom “freedom of choice” is a primary value.  That is, that the significance of one’s education depends less on the operation of student “choice” at every point than on the involvement of the student in coherent sequential activity imposed by the situation—a coherent sequencing that the student, by virtue of his status and condition—not by virtue of his sinfulness or folly—is generally unable to provide for himself, even aided by the misconceptions of his peers.  The question, then, is not how much “choice” the student has—he will always have some—but what we provide in the way of coherent sequenced intellectual life within the framework of choice—within the structure of a single course or a loosely related sequence of courses, the structure of the upper-division major, the structure of a graduate or professional school program.  Obviously, only the first of these is available for the lower-division orphan.  He has only courses.  I used to say, when I was saying things like that, that a collection of coherent courses is still an incoherent collection of courses, and I would still defend my early description of the life of the lower-division student as, perforce, that of a distracted intellectual juggler.

     So it was partly to remedy the fragmentation of attention and energy, especially in the lower-division, that I developed the conception of a two-year Program that operated not by stringing unrelated or loosely related courses together but by abandoning the very conception of the course and claiming and directing the bulk of the student’s attention for the first two years.  From the faculty point of view, this shift in unit involved the difference between planning a course—which every faculty member does routinely—and planning an education—which a faculty member is seldom if ever called upon to do.  A Program makes such planning both possible and necessary and imposes a frightening responsibility on a faculty more accustomed to assuming responsibility for a course and letting the invisible hand take care of education.

     The very conception of the Program as the significant educational unit called for a common required curriculum.  And since, as I have said, the point of the enterprise was to provide something in the way of liberal education, the “content” was to be broadly and thematically “political”.  But thematic concentration, the determination to do a single thing at a time, had a price—the omission of many important things—and we were always worried by the price and, apart from our own doubts, had to defend ourselves against the charge that we were leaving out too much—especially science and mathematics.  My own response was to grant the importance, regret the omission, insist that we were not going to do a number of different things in the Program, and invite the challenger to show us how science and mathematics could be integrated into the Program.  I had, in fact, hoped that our mathematician would solve the problem and suggest appropriate changes.  But his response to repeated prodding was something like “no, not now…” I got a card from him during a later summer, from Greece, where he had just enjoyed a performance of the Bacchae.  A p.s. on his card excited me: “Have solved Program science problem!” When he returned, I was waiting.  “What’s the answer?” He seemed for a moment not to remember.  Then, to my baffled disgust, “Oh.  Just add Prometheus Bound to the reading list.”

     No one else accepted the invitation, and I am convinced that it cannot be done.  The categories of “science” and of the “humanities” are radically different and irreducible to each other; they are simply different enterprises, both important, and you cannot do both at the same time without doing two different things at the same time.  Of course mathematicians and scientists have a great capacity to make you feel guilty if you neglect them.  They were not inclined, however, to worry about integrating important nonscientific matters into their own teaching (“students should get that stuff elsewhere”) and were prone, in those days, to throw C. P. Snow at you.  Snow, having spoken of Two Cultures, liked to point out that whereas scientists were familiar with Hamlet, humanists were not equally at home with the Second Law of Thermodynamics (alas! no movie yet about the Second Law).  We hardly argue this issue anymore, partly because no one seems to be worrying about “integrating” anything into a coherent educational scheme.

     But in those days I marshalled some sort of diffident defense, quite unconvincingly, hardly convincing myself.  All sorts of people, including students, were sure we should be doing something else.  I will not attempt a defense now.  If you want me to, I’m in the phone book.  I might say that it took me a long time to lose my sense of guilt about science and math—after all, they have “become death, destroyers of worlds”—clearly important.  But a few years ago, drawn into recalling the past and giving an account of the Program to a convocation of professors, I was approached over a second drink by someone who introduced himself as a professor of mathematics.  “Why,” he asked—not asked, really, but accused—“did you leave out mathematics?”  “Mathematics?” I tried to raise my eyebrows. “Why not?” I replied. “It’s not that important.  They can take it somewhere else.”  He stared at me in shocked disbelief.  But I felt liberated.

     So we had, for students who “voluntarily” entered the Program, a completely required curriculum spanning two years—a relatively brief respite from a lifetime of discrete courses—relieving students of the problems of choice, imposing on the faculty the burden of creation.  The starting point was the Athens-America conception that Meiklejohn had developed in the 20s at Wisconsin, but beyond that inspiration we went our own way, adding, as Matthew Arnold might say, the Hebraic to the Hellenic strain by dividing the first year between the Greeks and Seventeenth (more or less) century England—the King James Bible, some Shakespeare, Hobbes, Milton, and on through Burke and beyond.  The second year was to carry us into America—presenting interesting curricular challenges of greater difficulty.  My own inclination was to focus on the Constitution, on Constitutional Law—a sort of Gentile variety of Talmudic studies (since we are, more than most, a people of the Law Book, and since most of our problems are transformed, sooner or later, into judicial questions)—and on Literature as the path to the understanding of the American situation; if I were to do the Program again I would want to try to do that again, but better.

     Still, I do not want, in this account, to re-argue the curriculum, but only to say how it seems now upon reflection. What can you say about the Greeks except that we are forever in debt to whoever brings our mind, with or without our consent, into engagement with Herodotus and Thucydides, Homer and Hesiod, Sophocles and Aeschylus, Plato…  Greece seems almost to have been created for our enlightenment, and the Hellenic Testament, the story of its glory, its mind, and its self-destruction, the sad long day’s dying, pervades any fortunate Western consciousness. Still, there are some things very close to us that are quite alien to the Greeks. I cannot imagine a Greek Job, and Job, with his anguished cry for justice, is a pervasive echo in our lives. To study the Bible, Milton, and Hobbes is to take a second step towards self-knowledge.  There are lots of things we didn’t do, but what we did in the first year alone justifies the entire enterprise.  Each new reading joined the thickening context of understanding, remaining on the table, a permanent part of the mind.  What I said in a daring moment in Experiment at Berkeley is quite true—that we had discovered a version of the basic moral curriculum of the West.  I seem to be still studying it. I note, with some surprise, that the manuscript I am now completing, which I call The Burden of Office and think of more informally as Agamemnon and Other Losers, consists entirely of studies of some Program readings.

     Of course, neither the form nor the content of the Program was perfectly grasped at the outset.  We learned as we went along.  While the basic pattern was the same, there was a significant difference between the first and the second run.  I have made no attempt here to be accurate about what happened when, to keep the two versions distinct.  But we did have two attempts, two versions. It would be a bit misleading to say that the first program did not work the way I had intended and the second program did—although that is true enough.  I shy away from saying that it didn’t work the first time and did the second, as if we had one failure and one success.  Some of the most interesting things happened the first time, and it may be that turmoil has its own special lessons.  I have come to think of the two runs, as it seems convenient to call them, in terms of two of the novels of C. S. Lewis.  Out of the Silent Planet takes its title from Tellus, our planet, where the Great Plan was frustrated by the rebellion in Eden.  Perelandra tells of the planet on which the temptation was resisted and the Great Plan worked out as intended.  Lewis allows us, invites us, to infer that wonderful as the Great Plan was where it worked in all its docile beauty, the story about where it didn’t work as planned was perhaps even more wonderful still.  At any rate, I think of the first run as Tellus, the second as Perelandra.  I suffered through the first and I enjoyed the second, which was, in fact, wonderful as planned.

     Two runs.  If I had thought that the second would be like the first I would simply have written it off, called it off, not taken the necessary steps for a renewed attempt.  But at the end of the second year of the first run the original faculty—those who had stayed on for the second year—returned to their normal pursuits.  Two, as I mentioned, had signed on only for a single year.  It was generally a happy parting, much relief on all sides at seeing the last of each other.  But I was badly shaken. I had expected that the faculty, masters of their own classes, would have some problems working together, all teaching “our” students instead of each his own. I had expected a range of differing insights, a range of skills and backgrounds. What I had not expected was the raging vanity of the charismatic teacher.  The competition of Scholars, the thirst for distinction, was a familiar and almost comical fact of academic life.  But we encounter it, the vanity of scholars, at a distance. You published your stuff; he published his; and when he excitedly waved the telegram announcing his award you would say (a famous episode) “What! You!” The scholarly community is diffused through the world, and you usually appeal to it in writing. Its vanity, although sometimes flagrant, is generally tolerable and not too greatly obstructive.

     But the teacher, the aspiring great teacher, is, in a perverted version, the seducer, the enchanter working his magic on a concrete local group.  He must capture it, or he is nothing.  He does not like to share the limelight; the presence of fellow-professionals is intrusive and distracting; he is best at a one-man show; he worries about being upstaged or outshone. I think the conception of teaching as a “performing art” is deeply mistaken, but it is quite popular.  And it makes cooperative teaching almost impossible.

     Teaching is a subtle quasi-therapeutic art, not a performing art.  It is very difficult to observe; it is not spectacular. There is really nothing much to see when you see a great teacher at work… Oh, well, it is a commonplace at the University that we do not know how to really evaluate teaching—which may be why we rely so much on consumer reports.  And, like other professions, we tend to close ranks at this point.  Policemen are reluctant to condemn a colleague for unprofessional conduct; lawyers hate to disbar lawyers; doctors don’t like to disqualify doctors.  As for teachers—we may say someone is a fine or great teacher, or a competent teacher, or a good teacher for small classes… But we don’t seem to even recognize a category of harmful teacher, of teachers who damage minds entrusted to their care.  At this point we close ranks.  Except, of course, when a common program destroys the isolation of the separate classroom and makes “harming your students” also a case of harming “ours” and impossible to ignore. Life in the isolated classroom is obviously simpler.

     I blamed the troubles of the first run on my ineptitude in assembling its faculty. I may have been angry and nursed grudges, but I do not feel greatly justified in blaming my colleagues. They had, after all, merely accepted or succumbed to my invitation and were only doing what apparently came naturally. And I even felt guilty about knowing things about them that I would not have known except for the special circumstances of the Program, as if I had violated the privacy I had invited them to give up. Except for the Program, I would still, no doubt, consider some to be the strong teachers I thought they were when I recruited them, and it is unfair of me to first lure them out of their happy niches and then blame them for my disappointed expectations.  I am slightly contrite.

     But I was determined to give the Program another trial, and I tried a different approach to “staffing.”  I went outside the Berkeley faculty and called on friends, most of whom I had known since they were graduate students and I was an assistant professor.  They were teaching elsewhere, but were able to get leaves to come to Berkeley for two years—to my rescue, to my delight.

     This move, of course, raised some disturbing questions. Why could I not find regular Berkeley faculty willing and able to take part?  Was I trying to establish a Berkeley junior college below the dignity of what a now-defunct local paper referred to as the “U.C. Savant”?  Did the Program, in that case, really belong on the Berkeley campus?  I was troubled by these questions, but I did not look very hard for another Berkeley staff.  A perfunctory look turned up a few who were not uninterested, “but not just now.” And I was not really interested in searching beyond the circle of those I knew, or thought I knew.  I was quite exhausted and bruised and unwilling to spend the next two years re-arguing basic principles and fighting centrifugal tendencies.  I wanted colleagues who shared the vision, who understood the whole conception, who did not have local charismatic status to defend, whose educational background I had confidence in, who I thought I would enjoy working with, and who I considered to be first-rate minds and good teachers. Of those who came to my rescue, some had Berkeley tenure-equivalent credentials; a few were not in that particular race but had significant intellectual and pedagogic virtues.

     The difference was striking. What made the difference? First, I suppose, I now had some experience and was aware of some of the things we needed to avoid.  The importance of “constitutional” agreement was clear from the start—although I had thought it implicit even in the first run.  The underlying rationale was more clearly formulated and was accepted as a constitutive condition of participation.  But to say that is really to miss the main point: we were Friends, and not in a competitive situation.  This is really an embarrassing point. I had thought the five of us in the first group were friends; so what was the difference?  The most obvious was this: I had been a teacher of the core of the second group.  They were in no sense “disciples” or even continuing students, but we had been through the mill together, and knew each other as only those who had been through that sort of mill together can know each other.  Our essential mode was cooperative, not competitive.  We argued—even quarreled—a lot, but we worked well together.

     But the fact that I had surrounded myself with friends of this sort raises obvious questions.  Could I not work in harness with contemporaries, only with those a generation younger?  Was I taking advantage of the deference of former students?  Was the whole search for collegiality really a self-centered hoax?  I am, of course, bothered by the possibility, and it would be unseemly and futile to protest too much, but let me at least say something.  The basic agreement on fundamentals in the second group made possible a vigorous running disagreement on almost everything else.  I was not treated gently about anything—especially about the ideas that ran like persistent threads through the two year program.  And since I did not have to assume the role of Program defender I found myself dramatically less “central” to its life. I had, so to speak, specialized in defending the program, and now that was a function shared by colleagues. I felt, for the first time, that I was one of a band of teachers and, when it came to that, I was not an especially good one.  The others had, I thought, a better sense of the minds of our students, more devoted patience, better particular diagnostic flair and curative ingenuity and, oddly enough, a more single-minded devotion to the task.  For the first time I had the feeling that I was not necessary, that the Program could get along without me, that I could relax and enjoy what we were doing.  I was still, as the only regular Berkeley faculty member, responsible for whatever administration there was, and I was allowing myself to be drawn, marginally, into the turmoil distracting the campus (a turmoil that had surprisingly little effect on the Program).  But the actual day to day life and work of the Program no longer depended on me.

     What stands out in my mind when I now think of the Program is the habit we, the faculty, fell into of having dinner together every Thursday night in a private room at the Faculty Club.  I have mentioned that the attempt to do this on the first run collapsed after a single meeting. But for the two years of the second run we assembled every week over wine and dinner and argued for four or five hours.  We had some fairly firm rules. We would not bring up any administrative matters. We simply had a discussion of the material we were reading in the Program, explaining, interpreting, arguing about the significance of this or that and, as the evening drew to a close, saying something like, “You two seem to be disagreeing about the central point, so why don’t you each take twenty minutes or so to say what you think and get everyone launched at the assembly next Tuesday, and we’ll go on from there.”  Volunteering was frowned on; we were drafted for this or that service and found it very relaxing.

     Often it seemed that the entire Program was a spill-over from this long-running seminar.  We not only discussed the material substantively, we argued about how to use it. Looking back at the first run I would wonder how we could possibly have managed without the faculty seminar.  The truth is, of course, that we didn’t manage at all, and it is only by virtue of the uncanny unspoilability of the basic material, the inherent fruitfulness of a dimly emerging pedagogic form and, perhaps, the notorious Hawthorne Effect that anything educationally useful emerged in spite of everything.  But it was not Perelandra.

     I remember the second run seminar as the most exciting, the most significant intellectual and moral experience of my whole life, unmatched, unapproached by anything I experienced in four decades of interesting university life, mostly at Berkeley, which is, in many ways (was, perhaps), an academic heaven. The seminar made the Program, and I am sure this judgement would be shared by everyone who took part in it.

     I note that I have written far more about the first run and its traumas than about the second run and its triumphs. Obviously, the first shook me; the second renewed my faith. All my joy in recollection is focussed on the second, all my anguish on the first.  And yet I find myself writing nothing revealing about the quality of the triumph.  It is easier to describe pain than health; easier, as Milton demonstrates, to write of hell than of the joys of heaven. Still, I am a bit startled to find how much the first run dominates this account, how little I do to communicate the quality of the second run.  I do not intend to try to remedy that imbalance now, but only to acknowledge it.

     I emerged after four years reassured that education could still be thought of as the initiation of the new generation into a great continuing and deeply rooted civilization. But this, I suppose, calls for some comment.

     It was Berkeley in the middle and late sixties, one of the great centers of the generational uprising.  The wave of baby-boomers had broken over the college, a large cohort oriented especially horizontally, toward the peer group.  The times were stirring and troubled—the civil rights movement, sexual revolutions, the shocking end of Camelot, the war in Viet Nam loomed heavily on the horizon of those who, to the chagrined surprise of their elders, did not remember the war that had shaped and tempered the minds of parents and teachers.  They did not remember Munich or Hitler or Pearl Harbor or D Day or reading the headlines the day after Hiroshima, the surge of relief at calling off the million-corpsed invasion, the homecomings to triumphant and shattered worlds. They could not remember what their parents could not forget; their minds could never really meet—the one proud of the triumphant American expedition against the grim Rome-Berlin-Tokyo Axis, the other ashamed of the muddled, ambiguous American expedition to a strange periphery of Asia. But the parents saw them off to college and they arrived full of contempt for the world they never made, for racial and sexual injustice and hypocrisy, angry at having had to hide from the radiant fruit of science under schoolroom desks, enraged at the “unjust war” that had been doled out to them, without a confident religion, without a glowing political ideology—the scene was littered with fragments—with their own irreverent music, with the temptations of a shortcut to the expansion of consciousness, and armed somehow (God knows who taught them that!) with the powerful philosophical conviction that no one knew better than you what was right or good or even true “for you.”

     Only a vigorous imagination can begin to grasp the enormity of trying to initiate the class of sixty-something into an ongoing American branch of western civilization. They gave us a house to try it in. It was the battleground, but I’m not sure everyone recognized what the battle was all about.  It was to see whether our traditional cultural resources were powerful enough to withstand the contemptuous challenge of a despairing counterculture.  I suppose that sounds grandiose.  I think back to the House in its seedy disarray, half deserted, a handful of disgruntled students arriving for a dispirited seminar, or, again, to an argumentative throng, unexpectedly cheerful about something or other—a confusing sequence of disordered scenes.  I am reminded of the scene in which Stendhal’s hero, galloping away from a trivial bit of confusion, pauses and wonders, “Was that the battle of Waterloo?” So, I ask myself, having crept away, was that really a battle in the war over the American soul?  Without banners?  Without a band?  Yes, it was.  Sometimes it seemed as if the world was struggling to turn itself into illustrative material to accompany the core curriculum of the Experimental Program.  We were dealing, of course, with the themes that swirled about one of our greatest achievements —the creation and development of the great art of Politics.  To begin with The Iliad is to begin in medias res, in the midst of the perpetual war between the Human Expedition and the Human City, between the Quest and the Home.  The tale is echoed or mimicked in the masterful account of the war to the death between Athens and Sparta, the paradigmatic cultures of the marketplace and the barracks, of freedom and of discipline.  Against that background we grapple with the conflicting claims of Olympian rationality and Dionysian passion, with the elevation of Law over Fury, with the defiance of Law in the name of the Higher Law, with the great Platonic depiction of the parallel between Psyche and Polity ranging from the achievement of Wisdom to the reign of anarchy and tyranny in each.  And then in the other of our great moods we contemplate the Covenant in the Wilderness on the road from Slavery to dreams of freedom, and ring a different set of changes on the problems of Authority, Obedience, Rebellion, War and Peace, Justice, Laws and Courts.  And in the end, we come to see ourselves, to find ourselves, to know ourselves, as the present act in an ancient and perpetual drama…

     So, in spite of everything, I emerged convinced that the traditional spiritual resources of the culture, far from being obsolete or exhausted, were, in fact, if we used them properly, the key to our salvation.

     If we use them properly!  And who, alas, is doing that?  The natural University guardians of the great tradition are, of course, the Departments of the Humanities.  But, with a few honorable exceptions, they guard the Treasure as Fafnir guarded his—they breathe fire if anyone tries to steal it, to use it, that is, without a license.  Simply put, Departments in the Humanities believe in and practice scholarship.  That is to say, they are not interested in what the people they study are interested in.  They are interested in what scholars are interested in and, generally, the people they study were not scholars (“…Lord, what would they say / Did their Catullus walk that way?”).  I respect what they know, appreciate what they do. I, who know not Greek, live on their scholarly translations.  I read Dante and Virgil in translation; Tolstoy and Dostoyevsky in translation; Ibsen, even Goethe… I am parasitic on translators and scholars.  I say a heartfelt “Thank you”.  But they—“Humanists”—are obstacles to the non-scholarly human use of their work.  They almost always vote against efforts like The Experimental Program.  They usually consider people like me to be ignoramuses and dilettantes. They scare me (something easy to do to an emeritus professor who still thinks he was hired by mistake), but they kill significant liberal education. Fafnir! I am a bit surprised to find myself so bitter against such nice people, but I’ll let it stand… They, more than anyone, are responsible for the feeble state of liberal education in America.

     But now I turn from the central curricular problem—I had intended to attempt an extended eloquent exposition of the curriculum but, on reflection, I don’t see why I should—to another pervasive aspect of the Program.  Even as we were reading and arguing about freedom and authority we were also involved in an aspect of that problem as it colored the daily style of life in the Program.  Let me begin with an anecdote. A few years ago, long after the Program had gone out of existence, I was invited to a reunion of graduates of Meiklejohn’s Experimental College on the Madison campus.  Meiklejohn had died, but the faithful still gathered. I was invited although I had not been one of them as a student, and I spoke to them about Meiklejohn’s later years in Berkeley.  At one general session a half dozen of the alumni (it seems strange to call them that) spoke in turn, reminiscing about life in the good old days in the Ex College.  At the end, the chairman asked me, sitting enthralled in the audience, if I had any questions or wanted to say anything.  Yes, I said, I have one question: “Did you do your work?” Someone, a bit taken aback, launched a conventional affirmative reply but was interrupted by a tumult of denials.  No! no! not a lick of work for two years! too young! too much freedom!…  It was a rather bitter outburst, a pent-up moment of truth, a half-century-old complaint.  I actually do not remember whether I really said or only wish I had said what I do remember thinking. “Here we are honoring the memory of a man fired as President after bringing Amherst back to life, summoned to Wisconsin to create an educational utopia.  He struggled against enormous odds, gathered a faculty, fought with the establishment, forged a novel curricular conception, investing a great mind and soul in the effort. And then you arrive, saunter off to taverns, and have the unmitigated gall to not do your work!”

     Long before this episode, at one of the final Thursday night Program dinners, devoted to a review of our problems, one of us launched into something like a complaint about how we were breaking our backs in our efforts while many students were just loafing.  We were taping these Review sessions (I still have the transcript). I am recorded as replying, “Doctors always work harder than patients.”  The transcriber unexpectedly notes “Silence. And then laughter.”  Silence, and then laughter.  What else?

     There is a problem about freedom and coercion, impulse and habit, autonomy and shared ritual, in education as elsewhere.  If we can state our objectives as the cultivation of certain habits of mind—whether stated grandiosely as the habit of rational inquiry and deliberation or more diffidently as the habits of careful reading, analysis, expression, discussion—we must decide about the uses of discipline in the process.  It will come as no surprise that I not only believed in a required curriculum but that I also believed students should be required to do, should acquire the habit of doing, the work.  But requiring something and getting someone to do what is required are two different things.  The problem was to get our students to do what we thought they should do.

     To begin with, for the familiar range of idealistic reasons, we denied ourselves the usual array of sticks and carrots.  We decided not to have examinations or to give grades.  The university had just begun to experiment with a pass/not-pass system and we were given permission to use it, stretching its limits a bit.  Everyone whose performance did not merit expulsion from the Program was simply given a “pass”—a grade that would not enter into determining his subsequent grade-point average. No one was ever expelled from the Program for any reason other than serious performance delinquency.  We put up cheerfully with intellectual inadequacy.  I should say, on this score, that our students were good enough to have been admitted to Berkeley, but were not admitted to the Program on the basis of any special distinction.  We did not want to run an “Honors” program; we wanted as typical a group as we could get and simply chose haphazardly from a large number of applicants.

     There are situations in which there are examinations and grades and in which the student is told that he can do as he pleases about attendance and all that sort of thing.  He will be tested and judged; how he prepares is his business.  We were, I suppose, at the other extreme.  There was no terminal exam to prepare for; no grade to certify anything.  What we insisted on instead was that the student be there, with work at least more or less prepared.  What we could not tolerate was no exams, no grades, and no “prepared” presence. Essentially, to stay in the Program meant to be there and to do one’s work.

     This choice of a mode of operation created many unfamiliar problems for us.  The faculty had disarmed itself, put aside the usual disciplinary weapons.  Non-attendance? We’ll catch him on the Exam. Sloppy work? Do you want a C or D?  It’s all so easy, so familiar.  We wanted something better and discovered the challenge, the difficulty, of developing other modes of intellectual motivation and student-teacher interaction. I had a great skiing instructor whose diagnostic and instructional technique had impressed me as a paradigm of the teaching art. I tried to imagine him watching me turn my way down the slope and then saying “C+.  Next time try to ski better!” But I have seen instructors handing back a paper with the notation “C.  Try to write more clearly,” and then arguing with the student about whether the paper really deserved a B.

     We wanted something else. We wanted habitual prepared presence because nothing much could happen without that. But we wanted to improve the quality of intellectual activity.  We wanted to find the useful thing to say about a student paper without—instead of— giving it a grade.  We wanted to get the student to work harder and more fruitfully without the prod of grading.  It was not easy and, at times, we wondered why we were making life more difficult for ourselves.  But we persisted in the attempt to provide something other than extrinsic motivation for the exercise of the mind.

     Students sometimes missed grades.  One student announced that she was going to transfer out of the Program.  She was a good student and, had there been grades, would have rated an A. She liked the work, she said, and she thought she was getting a lot out of the Program.  And she really didn’t mind just getting a “pass”.  But she couldn’t stand the fact that her friend down the hall in her Dorm wasn’t doing much, was always going out on dates, and was getting a “pass” too.  It wasn’t right. Would you like it better, would you stay, if you got an A and she got a C? she was asked.  “Of course!” she said, and departed for a fairer world.

     Well, the path we chose required the insistence on a timely adherence to a sustaining common routine.  But, from start to finish, our performance was sporadic.  The faculty had its Hawks and its Doves.  Students defied expectations and it was difficult to do much about it.  We could expel or threaten to expel, but that often seemed too drastic and was, from our point of view, an admission of failure.  We hated to be reduced to nagging or disowning or, for that matter, to coaxing.  Toward the end of the second run I began to entertain the heretical thought that we should reconsider the abandonment of grades.  Grades don’t really bother the good student; they serve as a prod to the middling; they provide retribution to the delinquent.  I expressed my doubts to my second-run colleagues and was thoroughly raked over the coals.  They are probably right; grades are a second-best device for a second-best world. It no doubt reveals my condition when I confess that the issue does not seem to me as important as it once did.

     Meiklejohn, I believe, was more tolerant of student “independence” or “non-performance” than I was, and I was inclined to think he might have been out of sympathy with the spirit in which I was approaching the problem—that a late paper or missed lecture was not a minor failing but a sign that one’s life was fundamentally out of control.  Some students thought so too. The reproach pinned to the wall, “Joe, Joe, what have you done to my idea?”, signed “Alec”, was, no doubt, a forgery, but, I was prepared to concede, not a bad one.  Meiklejohn, I believe, had had—had been required to have—grades.

     But whatever the verdict about grades as a motivational and disciplinary device, I am not in retreat at all from the view that a college, a Program, a community of learning is not a collection of individuals each pursuing his own firefly, but a company taking thought together, sharing a common life, a common discipline, a common ritual.  I suppose I prefer the fellowship of the Round Table to the solitary quest for the Grail.

     Towards the end of the second run a committee of the College of Letters and Science looked into what we were doing and made a recommendation that eventually resulted in the College’s giving its approval to the Program on its academic merits while, at the same time, expressing the cautious view that continuation would depend on the availability of fairly scarce resources. I was pleased, since all I wanted at that point was academic approval. I did not know how much was generally known about our internal storms.  Our students were also involved, to different degrees, in the campus life of the time.  I remember being strangely pleased when, during a general student strike, they would show up for work, announcing that they were on strike against the University but not, obviously, against us.  The Student Movement was leaving us in peace, practicing benign neglect.  I knew some of the leaders, and while I was almost always opposed to what the Movement was doing on campus I was not terribly active in the fight.  On the educational front, it was generally known that I held to the reactionary view that the faculty was to govern education (my burden was that I defended the faculty’s authority even while I despised the way the faculty exercised it) and that I had scant sympathy for student participation in curricular matters.  As a matter of principle this was anathema to the Movement, and they could not embrace the Program as a step in the right educational direction. On the other hand, the Program was at least a radical innovation and an expression of the University’s concern with undergraduate education.  So the Movement neither supported the Program nor opposed it.  I welcomed being left alone and would not really have known what to do with student “support,” would have been embarrassed by it.

     The University faculty, initially a bit apprehensive that I was launching an undisciplined educational spree, seemed reassured by rumors that I was really trying to run a sort of boot-camp.  An unexpectedly candid report I wrote after the first year evoked many expressions of appreciation and good will, and by the time we were well into the second run I felt that the faculty was amiably tolerant, although far from accepting the validity of the fundamental conceptions underlying the Program.

     The Administration—at least in its higher ranks—continued to be supportive (although I was aware of some hostility at the Decanal level).  We were receiving favorable national publicity as educational innovators, somewhat offsetting the charge that the University was involved in research to the neglect of undergraduate teaching.  Even some Regents were, informally, offering encouragement, although at one of their meetings, before we were launched, a prominent member had criticized us as planning to teach (incite?) Revolution and had demanded, in vain, a letter supporting Free Enterprise from each of the Program faculty.

     In short, as we neared the end of the second run the auspices for continuation were generally favorable and I welcomed the academic approval of the College as a minor vindication and as a necessary precondition of a move from “experimental” to “regular” status.  But I was undecided about what to do.  The past five years had been exhausting and I needed some leave.  The Program faculty needed to return to their regular positions elsewhere.  Continuing meant gathering a new faculty, and while some of the second run faculty were perfectly capable of running the Program and were willing to continue, they could not simply stay on without resolving ambiguities about permanence that could not, at that stage, be resolved.  I realized that to have a break in continuity would be to lose some momentum, but I did not have the heart to scramble to put together a third trial run.  Two experimental runs were enough, I thought, and now the University should make a decision about permanence and, if it wanted the Program, settle upon the basic conditions of its existence.  (A canny “institution builder” might have tried to prolong the trial period indefinitely…)

     For the student of institutional reform the situation was not without interest.  I see it now as an encounter between the enduring and the ephemeral.  The enduring University is rooted in Departments, themselves based on the great cognitive disciplines that, over time, may merge and split, slowly altering the geography of the mind.  But the basic fact of modern University life is the Department; the faculty member’s home is the Department.  That being said (and this is an oversimplification), we will have to recognize that the University will also seem to be a great collection of Institutes, Schools, Colleges, Centers, Programs.  Some of these may be quite enduring, but they are administrative modes that facilitate the trans-departmental activities of Department members.  Such activity may be quite important, exciting, fruitful, opportunistic, and the non-departmental organization enables the University to respond to challenges and opportunities without having to endure frequent or traumatic fundamental restructuring.  The key to the relation between the enduring and the ephemeral is the institution of Tenure.  And Tenure is something you have (with a few ignorable exceptions) in Departments.

     This, then, was the context in which the question of the future of the Experimental Program presented itself.  Should it, could it, how could it move across the line that separated a trial venture from a regular, more or less “permanent,” part of the University?  Let me say, to begin with, that the University—the great ponderous soulless “multiversity” of popular caricature—had shown, in my case, remarkable openness and flexibility.  As a professor, I had presented to the University, from the back benches, a radical educational proposal, and within a year the Program was in existence.  During that year I was teaching a full load of courses in Philosophy and serving as Chairman, so that arguing for, planning, and launching the Program was essentially a spare-time activity.  It was an adventure, but the real point is that the University listened, smiled faintly, nodded, made a slight adjustment in the distribution of its resources (nothing much—perhaps a million or so) and said “go ahead”!  Now, four years later it said “not bad” and waited for me to make the next move.

     But while the University had been flexible and hospitable, I had done nothing to get the Program rooted in Berkeley’s soil.  The first run Berkeley faculty had returned to their Departments and had no continuing connection with the Program.  The visiting second run group had gone home.  I was left, panting, in my home department.  Where was the Program?  Who cared?  The House, unused, was, by a delicate act of University courtesy, held for my decision until I relinquished it.  But I had no working colleagues on the scene and was uncertain about what to do.  On the one hand, the prospect of continuing to work in—to live within, really—a Program with colleagues like those in the second run was very appealing, although I liked normal academic life also.  On the other hand, the tangle of problems and decisions that loomed over the path to permanence—and as I said, I would not do another trial run—was daunting.  But overshadowing personal considerations, although intertwined with them, was my loyalty to the idea of the Program as a great educational form and the sense of guilt that overwhelmed me at the thought that the Program might go out of existence because I lacked the energy or ingenuity to keep it going.

     Everything was in limbo when I got a call from a newly appointed vice-chancellor.  He had himself been appointed by a newly appointed Chancellor.  (Chancellors were ephemeral in those days.  This was our third or fourth in about as many years.)  The message was that the new Chancellor wanted me to continue the Program.  Would I draw up a proposal?  So I drew up a modest proposal.  A Program to start each year with about 150 students and 6 professors in each group.  I suggested that 3 of the faculty were to be permanent, tenured faculty who understood and were committed to the Program and who could guide the 3 transient faculty members and provide stability and experience.  In addition to thus keeping the Program in existence and available on a modest scale for Berkeley students, I wanted, by inviting the right visiting faculty, to foster imitation by State and Community colleges for whom such a lower division program might be a boon.  And finally, I proposed that we undertake to become a center for the study of higher-education teaching, the absence of which had seemed to me to be a scandal not mitigated by Schools of Education.  I gave the three-page proposal to the vice-chancellor and within a very short time he told me that the Chancellor was sold on the idea and wanted me to go ahead.  So, what did I want?  What should we do?

     I was not surprised that the Chancellor wanted the Program. I thought any educator in his right mind would want it. It was inexpensive, unobtrusive, daringly innovative, serious, highly regarded throughout the country and even abroad, worth its cost in public relations alone—to say nothing of the real point, that it was a great educational program.  I agreed to go ahead and got down to cases with the vice-chancellor.  He was a very engaging young man, apparently marked for high administrative positions. But at that time he seemed to me to be quite naive about the academic facts of life, not knowing what was easy and what was difficult, not knowing the score, hardly knowing what the game was.  But enthusiastic.

     I went to the heart of the matter quickly.  There was only one thorny problem: Faculty.  We would need a skilled cadre.  I could not do it alone.  I would need colleagues.  I already had three in mind, all of whom held tenure positions at their own very respectable institutions.  I had taught with them and knew they were very good.  I could probably induce them to come. But I would only invite them if I could offer them tenure. So besides myself I would need three to five tenure slots.  Everything else was easy. I think my request seemed reasonable to the vice-chancellor.  He raised no objections, and said I’d be hearing from him.

     But I did not hear from him for quite a while.  I did not expect to.  I thought he would be discovering the difficulties in the way of capturing tenure slots.  Hard-nosed Deans might not make a fuss about temporary Programs they did not believe in, on “soft” money.  But tenure slots were a different proposition.  They were precious, and departments fought over them.  Their assignment determined the fate of Departments and the shape of the University’s future.  Toleration for an ephemeral maverick program was one thing; giving away tenure slots was quite another.  So I waited, knowing the vice-chancellor would be encountering static.  As the personnel deadline approached I indicated that I needed a decision.  Eventually the vice-chancellor informed me, with some irritation, that he was not going to turn over a half dozen tenure slots for me to dispose of as I wished (not that I had put the request that way).  I did not argue, and the deal was off.

     But this was rather uncharacteristic of me.  I had not, in the course of establishing and running the Program, acquired the habit of taking “no” for an answer.  So why did I not try to go around the vice-chancellor, or over his head, to the Chancellor, or even the President, as I had been prepared to do in the past when necessary?  That is an interesting question, and when I try to put my finger on the crucial point at which the Program lost its life it comes to rest here.  Not with the denial of the tenure slots, but with my decision not to fight the denial—even though, in the end, I might have lost that fight.  It was convenient for me to explain, when I was asked, as I frequently was, why the Program went out of existence, that the University was unwilling to assign the necessary tenure positions.  That answer, while true enough, sounds as though I am placing the blame on the University, on the vice-chancellor or on the lesser Satraps who were stiffening his spine.  They were, no doubt, formidable adversaries, but not as formidable as my own doubts that paralyzed me at what might have been the moment of battle.

     My own doubts.  I could not solve the tenure question, in principle, to my own satisfaction.  As I have mentioned, tenure was only granted to people in departments and on the recommendation of departments.  Two of the people I had in mind held tenure positions in their own philosophy departments.  Should I approach the Berkeley department with the proposition that they should take on my candidates as tenured members of the philosophy department, grant them indefinite leave to teach in the Program, with the option of deciding, at any time, to teach philosophy courses instead?  The department had, in fact, recently gone through a bitter battle about such a case and I knew how hopeless such a request would be had I the gall to make it.  I won’t elaborate on the complexities of this situation, but it was clear to me that I could not hope to plant the cadre in various departments, enjoying, as absentees, the privileges of tenure.

     The alternative was to consider tenure without departmental status—an almost self-contradictory notion.  We now have a few “university professors” whose tenure may transcend departments and even a particular campus, but in those days they had not yet been invented, and they were not designed for our situation.  So how about simply pressing for tenure in the Program as a justified novelty?  I thought of it, of course.  But first, who knew how long the Program would last?  And if it terminated, tenure would not persist like the grin of the Cheshire cat; it would vanish with the Program.  But besides the risk, which I was unwilling to invite others to assume, it was not at all likely that the highest university authorities would approve some form of non-departmental organization supporting tenure appointments.

     And if, in spite of the odds, it did—and this was the crushing difficulty, the one I never discussed but that weighed on my mind—I was not sure that I would want it or could recommend it.  I knew how intimate and abrasive life in the Program could be.  I thought it very likely that sooner or later friends would fall out, would disagree in ways that would make working together impossible, might get fed up with each other or with the Program, or with students at less than arm’s length, or would, in sheer exhaustion over the toil of collective life, yearn for the healing privacy of a course of one’s own.  A yearning we could not satisfy in the Program.  I was not unaffected by the falling out among friends in the first run, and was even more troubled by the fact that in the second run, in spite of my caution, I had invited a friend who became so upset by his disagreement with the rest of us about how to teach that he soon withdrew in embittered rage from communication and interaction and was a dead weight for almost two years.  We were stuck with him because I had invited him for two years and he had taken leave, etc.  Suppose he had had tenure?  This experience points up the virtue of departments.  A department member does not have to get along with anyone.  He can despise his colleagues to his heart’s content.  He need have nothing to do with them.  And he can get on with his teaching and research as he pleases.  Tenure in that setting made sense; but tenure in a Program?  I was baffled.  Tenure was necessary.  Tenure in departments was not in the cards.  Tenure in a Program alone worried me.  I anguished over the problem.  I considered all the things you are now about to suggest.  But baffled I remained.

     And I was unnerved by other doubts, not about others, about myself.  Life in the Program, especially during the second run, was enormously exciting, and I was really willing to do it again.  But why was I so exhausted?  I remember one day in the first run when, late in the afternoon, as I was settling down to some task, I saw our Mathematician sauntering towards the door.  I must have sent him a reproachful look because he turned back to me.  “I know what you’re thinking,” he said, “but let me tell you something.  I do all my work, all the reading, attend everything, meet my seminars, confer with my students.  But I’m not going to overwork like you.  I’m going to work the way a professor should work.  This is an educational experiment, but if it can only work if the professors overwork, the experiment is a failure.  You’re working too hard.  I’m not going to, and I’m right…”  He was right, of course.  I did work too hard.  I realize that the work of establishing a new Program was far greater than the work involved in teaching in an established, on-going enterprise, that life in a continuing Program would get a bit easier.  But at the very least, teaching in the Program was a heavy full-time job, whereas, at Berkeley, teaching was considered only part of a professor’s work.  He was supposed to be doing scholarship, research, writing as well, and his relatively light teaching load reflected that fact.  Was I to become a full-time teacher?  I didn’t really want to, although I was devoted to teaching.  I wanted—should I be ashamed to say?—to live the life of a normal Berkeley Professor.  Sometimes, during the Program, I would cross the campus to see old cronies at the Faculty Club.  I was like a harassed mother who had escaped her demanding brood for an adult lunch-break.  I felt the seductive charm of “normal” academic life—the intellectual tension, the pervasive wit, the intellectual privacy, the leisurely autonomy, the cool arm’s-length, controlled, well-mannered involvement, on one’s own terms, with others.  I missed it, and I shrank from the thought of giving it up for the unremitting intensity of life in the Program.  And if I felt that way now…  Well, these were secret thoughts, unsharable, treasonous.  Was I really prepared to wrestle endlessly with the recalcitrant young for their own unrealized good or to live the life of a missionary in a corner of a gaudy Rialto?  The very question was enervating, demoralizing.

     So, when the vice-chancellor told me there would be no tenure slots I did not argue.  I did not spring into battle in order to face, if I won the battle, a problem for which I had no solution.  Perhaps it was simply weariness.  Perhaps something was operating at a deeper level, something which I have no desire to understand.  I let it go.  But now, when I think about the Program’s failure to graduate from “ephemeral” to “enduring”, in spite of its unique quality, I do not blame the university, I blame myself.

     The fundamental delusion may have been to suppose that it was possible for a great organism like the university to nourish or sustain for long an enterprise at odds with its essential nature.  The mode of life required by the Program was not congenial to the normal Berkeley professor, violating the basic assumption that one teaches what one is expert at, as one thinks best.  Experts teaching their subjects to students who want to study them is our ideal condition—the best experts and the best students.  Some requirements, some structured sequences.  Courses, courses, courses—the established American pattern of schooling producing, not infrequently, the tough, provocative course that lingers almost alone in the fond memories of alumni.  The pattern common to Harvard, Yale, Stanford, Berkeley, Swarthmore, Oberlin, Wesleyan, Amherst, Smith, Michigan, Wisconsin, Columbia, Fresno State, Ohio State, and almost everywhere else.  It is easy.  We know how to do it.  It may not be all that good, but it cannot be all that bad.  It has, after all, made us what we are.  So it is not surprising that our basic pattern persists, taking ephemeral challenges in stride.  A daring young president summoned Meiklejohn to Wisconsin and gave him some running room for five years before the weight of the patient regular faculty prevailed and the Experimental College vanished.  Hutchins, with great energy and flair, created his college (not really all that radical) at the University of Chicago and, as he told me ruefully, the University proceeded to dismantle it as soon as he left.  Scott Buchanan and Stringfellow Barr launched St. Johns on its significant path and it is still alive after half a century.  But it is not a part of a university; it stands apart, a church served by its own devoted priesthood.  The University of California’s Santa Cruz campus, flirting initially with a “college” organization and never trying anything terribly different, becomes more normal every year as the department, under a different name, increases its dominance over the college.  The fact is, our prestige institutions are content to be what they are—course-givers with, perhaps, a few local variations.  The key is, of course, the faculty.  It is what it is, not something else.  It does what it does best, and it is hard to get it to do anything else, and perhaps unreasonable to try.  Meiklejohn and Hutchins tried to do different things with specially recruited faculties living as second-class citizens within the domain of the regular faculty.  St. Johns does different things, but with a “different” faculty, and lives beyond the mainstream.  Santa Cruz wanted to do different things, not really knowing what they were, but it wanted to do them, as I learned when I was asked to serve on committees, with a Berkeley-style faculty.  In the Program at Berkeley I wanted to avoid the “second-class citizen” problem by getting some Berkeley faculty to act differently, and ended up in the war of the first run.  For the second run I gathered a non-Berkeley faculty that did different things brilliantly, but I could not solve the problem of turning them into Berkeley faculty.

     The nature of the faculty sets limits to the possibility of “reform”—taking “reform” to mean not mending one’s evil ways but rather reshaping the structure of learning and teaching (and, of course, the better the faculty, the more difficult it is to expect it, or even ask it, to change its ways).  Within our conventional limits we hail as innovative the establishment of the great course.  It can be a course in Western Civilization, or in Great Books, or Integrated Humanities or Integrated Social Science, or American Civilization, or Citizenship, or World Culture…  Each is an attempt, frequently successful, to mitigate fragmentation and excessive specialization, to provide some integration and perspective.  They are usually founded by the vision and energy of a powerful faculty member and persist, even with diminishing elan, as cherished and distinctive features of the institution.  Birth by committee is, I believe, rather rare, but I am not opposed, a priori, to miracles.  Besides the special course and the addition of new courses, the educational change generally compatible with the basic structure is tinkering with requirements and sequences, sometimes sparked by genuine educational considerations, sometimes by political pressure triumphant, even, over responsible faculty qualms.  To a disappointed or frustrated idealist like me these minor matters are barely worth the candle substantively but may be grimly amusing to observe as they reveal the interplay of intelligence, habit, power, self-interest, ignorance and irresponsibility in the conduct of affairs in a great institution.

     Well, in the immortal words of Edith Piaf, I regret nothing. It was worth doing.  For many of us, it was a uniquely great educational experience.  I am proud of the exasperating ephemeral now-vanished child.  The House is still there, and, as I said, when I drive past it now nothing happens.  Almost nothing.  The other day as I drove by, a small drama from the past popped into my mind.  I had walked in at about 7 AM. No one was there. I looked with distaste at the disorder, the weary furniture, the carelessly strewn objects.  Then I stared in irritation at the enormous poster, the head of Dylan (Bob, not Thomas), lording confidently over the great hall. Unavoidable, dominating.  It had annoyed me for weeks.  On a reckless impulse I stood on a chair, unpinned the poster, rolled it up and carried it off to my office.  Some hours later an indignant young man stomped in. “What happened to my picture?” I looked at him coolly.  “I took it down.”  “Why did you take it down?”  It was not really a question. I parried with, “Why did you put it up?” “I put it up,” he said, contemptuous in advance of an assertion of authority, “because I felt like it!”  “I took it down,” I said, “because I felt like it.”  He stood silent for a rather long moment.  Finally he nodded.  “Fair enough,” he acknowledged, reached for his poster, and left with the head of Dylan under his arm.  Whim baffles whim.  The memory of that small triumph of reason warms me.

     In the end, the Program must be judged to have made no enduring difference to the quality of education at Berkeley.  The sea of normal life has closed over the sunken hope, the surface now unbroken, the depths unvisited. I have never been tempted to launch a salvage operation or to get back into the educational wars, since, apart from other reasons, I seldom see a banner raised that seems worth repairing to—only trivial proposals, not worth fighting for, not worth opposing. And I have had my chance.

     When I look back at the Program through the haze of present distance, ignoring the details of small triumphs and small tragedies, banking the glow of old animosities, stilling regret over misplays or false moves, several things stand out.  First, the struggle to achieve something of a working intellectual community—a group of faculty and students engaged in a common enterprise, creating a structure of ritual and habit triumphant over the impulses of disintegration—an intellectual community as a way of life, sustained for a significant period of time.  It was, in a world of discrete, self-contained, autonomous classrooms, a glimpse, a reminder of the quality of a Pre-Babelian world.  That glimpse of community is, perhaps, the most dominating of all my memories.

     And second, the curricular conception—the attempt to provide for our present crises the cultural context within which they are to be understood.  Something has happened when you can grasp the thread that runs from Orestes and Antigone to West Virginia v Barnette and the Presidential campaign of 1988.  When you can see that the attempt to impose the Tablets of the Law upon the worshippers of the Golden Calf is the same struggle as is involved in our attempts to make the Constitutional Covenant and the Law prevail over our hedonic impulses and narrow partialities.  To fail to provide this great context is to send our students, robbed of their proper clothing, of their proper minds, naked into the jabbering world.  It is stupidly irresponsible of the University to allow this to happen.  It is a betrayal of its trust.  It is, as I used to say, a consequence of the fact that the University, simply by being what it is, has killed the College.

     These convictions, with which I began, survive in me unimpaired, although shadowed now by frustration and defeat.

December 21, 1988,  Berkeley

“A Venture in Educational Reform” was first published in The Beleaguered College, Institute of Governmental Studies Press, University of California, 1997

Judicial Activism and the Rule of Law — Toward a Theory of Selective Intervention

Part One

As the Supreme Court goes about its work, distracting brawls break out among the spectators.  The pattern is familiar. When the court is seen as liberal, its liberal rooters urge it to continue its activism, reshaping the law in a liberal political direction, while its conservative critics urge it to remember that its task is to announce the law, not reshape it to the demands of a political agenda.  Liberal theorists develop and defend a theory of judicial activism; conservatives counter with a rule of law theory that denies the legitimacy of judicial policy-making.  The one encourages what the other condemns as a dereliction of duty.

     When the composition of the court changes and it is thought to have become politically conservative, the confusion among the spectators may seem comical.  Conservatives can be heard to mutter that their enthusiasm for judicial self-restraint and a policy-indifferent rule of law may have been excessive and that, at the very least, a period of corrective activism is needed to undo recent distortions.  Liberals, on the other hand, struggling vainly to reconcile themselves to adverse election returns, suddenly rediscover the virtues of a court “above politics,” a court that diligently refrains from reading its preferred social policy into the law, even trying to remember how to pronounce stare decisis as they attempt to draw its protective mantle over the recently hailed but now precarious innovations of the last few decades.

     The perennial conflict between a rule of law position and an activist position becomes especially bitter when an important judicial appointment is at stake.  Conservatives may want a conservative judge, liberals a liberal judge, but something about the situation keeps them from simply saying so.  They want, they may say—almost must say—a good judge, a neutral master of the judicial art who happens, more or less irrelevantly (as a small bonus, perhaps), to be a liberal or a conservative, conceding that there may be something like a good judge quite apart from his or her political leanings.  The argument, as we can remember, is full of fury and, in its course, may not significantly clarify the conception of a good judge or free it from its political entanglements.

     A crude liberal-conservative dispute may be displaced by an argument between two conceptions of the judicial function: the orthodox non-political view, and the challenging activist view.  The judge is thought to have a choice as to which kind of judge to be.  But this is a misleading way of putting the problem.  The significant question is not whether it is better to be an activist than a rule of law judge.  It is whether it is possible, even if you want to, to carry out the orthodox rule of law program.  It is one thing to say that judges, if they wish, can slip into an activist or policy-making mode; it is drastically different to hold that they cannot help doing so for reasons built into the very structure of any legal order.

     This is a deeply significant issue about the very nature of the legal order, and it is not always easy to understand. So I am going to try to provide a guide to the battle. I will begin by sketching a simplified but essentially correct version of the orthodox rule of law view. I will then show, with considerable reluctance—and to my deep regret—that it does not really work, and that this failure has serious consequences.  I will then see what can be saved from the wreck, and try to end on a reasonably constructive and even cheerful note.  But it is a bit like trying to continue to be religious after you have been cured of fundamentalism.  Less zeal, perhaps, but more understanding.

     I am not writing this primarily for lawyers, who generally think they know all about it but who seem unwilling or unable to explain the mystery to the uninitiated and who, if pressed, fall into their familiar habit of referring cryptically to this or that case or murmuring something about the common law.  Lawyers can be an aggravating problem, but I worry here about us plain citizens baffled by legal mysteries.  So I will try to explain what it is all about without mentioning Marbury v Madison or Roe v Wade or any other real case.  I will make up examples if I think I need them, and I will use plain English throughout, in the belief that if the problem cannot be explained that way it cannot be explained at all.

     First, a word about the “rule of law,” a reminder about why that seems important.  It is not that we really want a world in which every contingency is covered by a rule or a law, in which we are overpowered, hemmed in, governed by a mass of imperatives, left with no anxiety-producing discretion in a world in which an incorruptible judge always has the last word.  But experience has taught us that it is usually better to have laws than to live subject to the discretionary judgments of others. In the end, we want to have traffic laws rather than to leave it all to police instructed simply to stop and punish, as they think best, anyone they think is driving improperly. The real point is the insistence that, where we grant political authority over aspects of our lives, such authority be exercised within the constraints of laws that limit and guide it.  When I apply for a permit I want to have to show that I satisfy the conditions in the law; I do not want to have to convince an official with unguided discretion that granting it is a good idea.  We are, on the whole, beyond regarding lawmaking itself as a necessarily malign intrusion of the mischievous human will (infected with unintended consequences) into an otherwise benign nature.

     To insist that the rule of law be protected by an independent judiciary is to seek something more than the dutiful self-restraint of officialdom.  We want courts to settle the question of whether someone has exceeded the limits set by the law. And we want judges to be free of essential dependence upon the wielders of power so that they can do what they are supposed to do without being intimidated.

     Such a view of the rule of law and an independent judiciary is something that every American drinks in with his mother’s milk. It is an essential feature of our religio-secular “constitutionalism”: 

     The Constitution, solemnly sired at a Sinai-like moment in history, expresses the covenant, the social compact, the agreement, the consent of the governed, that lies at the foundation of our legitimate political life.  In a world weary of the sway of arbitrary rule it tames raw power into authority, subdues ruling into office-holding, creating a system of individual rights and of legitimate but limited governmental power.  It provides for a legislative, an executive, and a judicial branch, each of which is to perform only its own distinctive function (oddly echoing Plato’s definition of justice in the Republic).  The legislative and executive branches are to shape politics into law and policy.  But the judicial branch, removed from politics, is the guardian of limits, of legitimacy, of “constitutionality.”  It stirs into operation only when there is a claim that a law has been violated.  Does Smith ignore the “Stop” sign?  Does the legislature pass a law about school prayer?  Does the President issue an order about travel to a foreign country?  The Court may be called on to decide whether Smith violated the traffic law, whether the legislature violated the law—the constitutional rule—about religion, whether the President exceeded his authority over foreign travel.  No one, no person, public or private, is immune to the demand that they act within the law.  And this is the basis for the special place of the Court in our scheme of things.  It is the guardian of the system, not a partisan within it.  In a gaming culture like ours it is natural that we think of the court as a glorified umpire or referee.  It is on the field, but it is not a player.  It knows the law and enforces it impartially, calling it as it sees it, unconcerned about particular outcomes, about who wins or loses, essentially, we think or expect or hope, above the transient struggles and quarrels that it adjudicates.

     This, or something very much like it, is what is wanted when the cry goes out—reaching us now from behind the rusted iron curtain—for the rule of law protected by an independent judiciary.  Without this as a background the very shape of “judicial activism” cannot even be seen, for it is essentially a challenge to the conception of the court built into our legitimizing constitutional myth.  Judicial Activism is an iconoclastic position, shaped, like all such positions, by its particular icon. While I have given a brief sketch of the familiar myth—if it can be called that—I must here emphasize the feature most subject to attack, the point of greatest vulnerability—the “separation of powers.”

     The phrase “separation of powers” does not appear in the Constitution itself, but it is taken for granted as part of its theoretical context.  There is nothing terribly mysterious about it. In its pure form, each branch of government is to exercise only the kind of power that has been given to it, not the kind that has been given to another branch.  So, we say, the legislature makes policy judgments and embodies them in laws; the judiciary is not to judge the wisdom or value of the laws but is only to say if they are violated or if they themselves violate higher laws.  That division of labor seems clear enough.  Ends, goals, policies, values, and the means of pursuing them—these are the concern of the legislative and executive, the “political” branches.  It is not for the judicial branch to decide whether abortion or capital punishment or affirmative action are good or bad things and should be prohibited or permitted or required; it is only to say whether they are constitutional.  It is not, in short, supposed to make policy or decide policy questions. 

     If this is the crucial point of the traditional or ceremonial view of the Rule of Law, the view of Judicial Activism is that judges make policy all the time.  And not just because judges don’t want to do what they are supposed to do but because it is impossible to do only what they are supposed to do in the mythic role. The serious argument about Judicial Activism is an argument about the unavoidability of policy-making by judges. Activism thus involves, as we shall see, a rejection of the theory or practice of “separation of powers.”

 * * * * * * * * *

     Let me begin by simplifying the traditional view into a few essential propositions:

     First, there is a special class of “judicial” questions to which the court is limited.

     Second, there are “correct” answers to such questions.

     Third, it is possible to find judges capable of performing the function of providing correct answers to the proper questions “objectively” or “properly” (not infallibly).

    Taken together, these constitute a view of deceptive simplicity, but they go to the heart of the matter.  If there is no distinctive class of judicial questions—like, has a law been broken?—the court will indeed come close to being “a Kadi sitting under a tree” dispensing its wisdom about a variety of matters, not simply upholding a rule of law.  If there is indeed no “correct” answer, the “rule of law” simply serves as a cover for unconstrained judicial policy-making, the search for good or moral or wise answers by those not elected to try to find them.  And if judges, because they are human and socialized and all that, are necessarily conscious or unconscious partisans, why cling to forlorn hopes of judicial objectivity or neutrality?  So, if it can be shown that the court cannot confine itself to rule-conformity questions or that there are no “correct” answers to such questions, or to many of them, or that Judges cannot be seen as tolerably good correct-answer-seeking animals—the case for activism seems to triumph without further ado. The first stage of my argument will consider these matters.

          I.     Proper Judicial Questions

     The argument goes:  There is a limited range of questions appropriate for a court.  They are all of the form “Is X in conformity with the Law?”  The distinction is between questions of rule-conformity and questions of “utility” or “value”—between “Is X in conformity with the law” and “Is it better to do A than B?”  Thus, the question of whether it is a good idea to keep students off the basketball squad if they fail a single academic course is a question of policy for the school board.  If a student takes the school board to court, the question for the court is not whether it agrees with the Board about its policy but whether the Board’s decision is in conformity with some relevant and overriding law.  Or, if a State adopts capital punishment for murder in the belief that it will improve the general quality of life, the court is to decide only if the State’s action violates some law by which the State is bound, not to overrule the State if it has a different view about the value of capital punishment.  The judicial question is one of rule-conformity not of wisdom.

     These are not just different questions, they are different kinds of questions.  And they evoke two different kinds or modes of argument—the casuistic (or Talmudic), and, what might be called loosely, the pragmatic or utilitarian.  These are the two great and significantly distinct kinds of argument that we are all constantly involved with, and it is necessary to distinguish them because, as you can guess, the Court is supposed to stick to one of them.  Casuistry is the art of applying general rules to particular cases and, in the process, where necessary, resolving, by interpretation, apparent contradictions.  The argument that a fetus is a “person,” that abortion is the killing of a person and thus a violation of an overriding law against killing persons is, without aspersion, a casuistic argument.  (Pascal’s attack on “rule twisting” Jesuits in the 17th Century gave “casuistry” a bad name.)  The argument that each person is sovereign over his or her own body and that an abortion decision is an exercise of that sovereignty is also a casuistic argument.  The argument that abortion is an effective means of population control and tends to increase—or decrease—happiness is, by contrast, a pragmatic or utilitarian argument.  It is obvious that if courts are to consider only questions of rule conformity, the mode of argument appropriate for them is casuistic, and long policy arguments in briefs or in opinions ought, at least, to raise eyebrows.

     I pause for a passing nod at a natural and significant question:  Even if we can distinguish these two kinds of question, why should courts or judges confine themselves to one kind, and the least interesting and important kind at that?  Why should they limit themselves to finding that something is in accord with the law or “legal” when it is clear that it is a stupid or harmful law that can, with the available techniques, be effectively nullified?  Why not simply act on the familiar principle that you are always to do what you think will promote the greatest happiness?  The answer—that it is not your job to impose your wisdom on the situation, that that is precisely what the separation of powers means, that you are just to deal with legality and let others, usually elected and answerable, deal with the more important matters—this answer may seem a pedantic, crabbed response entailing a sheer waste of wisdom.  If that’s what “separation of powers” means, why respect such a peculiar principle…?  The reply would have to be a long one, aimed, among other things, at curbing the judicial desire to do more good than the job calls for.  A reminder, as someone said, that while Judges are to be lions they are to be lions under the throne.

     Let us now examine the conventional assertion that the court’s task is to deal only with rule-conformity questions.  I will try to show how the proper judicial question, “Is X in accord with the law?”, seems inevitably to turn into or be displaced by the naughty question “Is A better than B?” (demanding a “policy” response)—showing, presumably, that the rule of law program cannot be carried out.

        That the court cannot simply apply the law because the laws are contradictory.

     The argument is that if laws “contradict” each other, give conflicting instructions, you cannot simply apply the law but must decide which it is better to follow, which is the more important, which weighs more heavily in the scales as you “balance” the conflicting values.

     Suppose that the constitution or the laws authorize the government to do such things as regulate commerce or wage war or run a public school system. Suppose also that the government is, by the constitution, forbidden to abridge the freedom of speech.  Imagine that the government, as authorized, enacts laws that punish deception in advertising, or the revealing of defense secrets, or that require pupils to read assigned books and submit to recitation and examination.  Do not such laws abridge the freedom to speak as, and only as, one pleases?  And is not the government forbidden to do that?

      Confronted by apparently conflicting rules—and we are confronted by them everywhere—what are we to do?  We cannot simply “follow the rules” since what we do in obedience to one is a violation of the other.  A law punishing deceptive advertising is authorized by the grant of authority to regulate commerce; it is prohibited by the law about not abridging free speech.  Obvious simple principles may not help—for example, that a grant of power extends only to the edge of a prohibition, as in “You may…but only so far as it can be done without….”  You might accept that, as it subordinates the power over commerce to the protection of free speech.  But would you also hold that government may try to win the war provided it can do so without abridging speech to keep the invasion date secret?  Or try to educate children but only if it can be done without making them read or answer questions on exams (self-incrimination too!)?  “Prohibitions mark the limits of powers” is a plausible principle, but you cannot ride it like a bull into life’s china shop.  On the other hand, if a prohibition doesn’t limit a power, what does it limit?  And when?  And how are we to decide that?

     In this common situation, faced with apparently conflicting rules we find ourselves torn between two paths—the path of casuistry or “trying to make sense” and the path of utility or value maximization or, to use a familiar judicial term, “balancing.”  The casuistic instinct is to try to show that what seems like a contradiction is really not one, that a proper reading of the law will remove the apparent conflict and restore the situation to one in which it can be decided whether or not something is in accord with the law, properly read.  Thus, it might be discovered that “freedom of speech” in the First Amendment is properly read as meaning the freedom of “political speech” and that the regulation of “commercial speech,” advertising, is not barred by the amendment.  Or that the First Amendment applies only to adults and, therefore, not to minors in school.  Or that the War Power enjoys exemption from peacetime constraints.  These are oversimplified, quasi-hypothetical examples, and I do not mean to endorse them or argue them here.

     The other path tends to take the apparent conflict as an example or symptom of human and social incoherence illustrating once again that a yearning for consistency is a petty obsession, that the law is a charming patchwork, not a coherent scheme, that it makes no sense to try to make sense out of a situation that makes no sense, and that the thing to do when confronted with these “contradictions” is simply to decide what the best thing to do is, forego the hairsplitting, and “balance” the competing values on whatever scales you happen to have.  As, for example: curbing advertisers is more important than letting them speak freely; or, perhaps, protecting free speech is more important than punishing commercial deception; winning the war is more important than free speech; educating children is more important than letting them take the First Amendment with them through the schoolhouse gate…making, in each case, a “balancing” or policy decision.

     We don’t need real “legal” examples to get the point.  We could just as well consider a “Thou shalt not kill” sign at a church barbecue.  You might explain to an alert, bewildered child that the commandment really means “Thou shalt not murder a human being” and doesn’t, therefore, apply to the animal whose ribs we are about to gnaw—starting her down the primrose path of casuistry.  Or if you want your child to eschew a life of quibbling you can start her down the path of pragmatic balancing by pointing out that there are no “absolutes,” that “Don’t kill” is a fine idea but so is “Man does not live on bread alone” and you have to balance respect for animals against our need for protein and it’s O.K. to have a barbecue if you don’t eat too much or waste anything—having her complain to her analyst years later that you taught her to be a calculating compromiser.

     Confronted by apparently conflicting instructions we can try to unpuzzle the matter by making distinctions, resolving (or creating) ambiguities, clarifying meanings so that we can ask, in the end, “Does the regulation of advertising violate the rule that says, or really means, you cannot abridge freedom of political speech…?” and look for a correct answer, in this case “No!”, even from a judge who believes in the free market.  That is, if there is a respectable art of figuring out, or discovering, what the rule “really means.”

     What something really means!  What it means as “properly interpreted,” as having been put through the interpretive mill.  There is an interpretive art, not always regarded as respectable.  I remember an exasperated Divisional Commander driven to including in an order a stern “This is to be carried out and not interpreted.”  It is an art or exercise that has a fascination of its own, a deep pleasure in fitting pieces together so that everything suddenly makes sense.  “Makes sense” or, alas, gets you out of it, or lets you do it in spite of what it seems to say.  Is it an art of cognitive discovery, capable of going wrong, of falling into misinterpretation, subject to heavy constraints of objectivity?  Or is it, through and through, a partisan art, more or less clever or smooth or strained or ingenious, but never able to claim, beyond enabling or disabling, being really right or objectively correct?  Is it like deciphering a coded message, so that you know when you’ve got it right, or is it more like a reading of a musical score?  And who says “speech” in the First Amendment means “political speech” and what makes you think that’s the right interpretation?  Or that there is a “right” interpretation?  Do you applaud it because you want to restrict advertising but not the criticism of government…?

     It is a part of the burden of the conventional rule of law view that, where interpretation is needed, as it usually is (and I pass over the question of whether all reading, even when you think the meaning is plain or clear, is “interpreting”), there are correct and incorrect interpretations.  This is a terrible hurdle in its path since it seems to be widely held that where two interpretations are possible neither can claim to be correct—at least not in this democratic age in which one person’s interpretation can be no better than another’s.  I defer the pursuit of this problem, noting only that the attempt to resolve apparently conflicting orders into an unambiguous rule plunges us at once into an interpretive quagmire, forced, if we wish to maintain the traditional view, into a defense of interpretive “correctness.”  Not, perhaps, hopeless, but difficult enough to tempt one into skipping a difficult journey with an uncertain destination and settling for the simpler—and possibly unavoidable—life of “balancing” competing values.  If, in the end, the test of an interpretation itself is how well it serves your purpose, not whether it is “correct,” why insist on being dragged kicking and quibbling into the world of subjectivity and partisanship, clinging to vain hopes of an objectivity or correctness that went out of style when it was announced that “God is dead!”?

     So one great difficulty with the first item in the rule of law creed is that, to resolve apparent contradictions, “Is X in accord with the law?” will become “Is X in accord with the law as properly interpreted?”, in which the propriety of the interpretation itself may be thought to rest on whether the interpretation supports a valued result.  To the extent that the question becomes “Is interpretation A better than interpretation B?” instead of “Is X in accord with the rules?” the round goes to activism.  To avoid that result it must be shown that the “correctness” of an interpretation is grounded in something other than its policy consequences.  Not hopeless, but difficult.

     I should point out that while casuistry or interpretation has its problems, “balancing,” frequently invoked in judicial opinions to deal with contradictory or incompatible rules, is, for all its suggestion of scales and weights and objective measurement, something like a conjurer’s operation.  “We now must balance the value of A against the value of B,” says the court with a hurried glance at its invisible scales, “and A is clearly weightier than B!”  Whenever the court says it has a balancing problem you know that the rule-correspondence question, the question of whether something squares with the law, has, at that point, given way to a pronouncement of policy or value-priority.  If you look for it, you will be surprised at how frequently this happens.  “Balancing,” whatever that may be, seems easier than following a twisting path in search of the correct interpretation.

            So:  In any large set of rules there will always seem to be contradictory or conflicting orders. In some cases the conflict can be resolved, casuistically, in a way that strikes almost everyone but the most rabid partisan as obvious. It is not necessary to rush from the mere appearance of contradiction into the balancing act (although if you want to, there is always a pretext).  Having said that, it must be granted that not all contradictions or conflicts can be made to vanish by the exercise of casuistry.  In such cases some policy decision about priority or importance or value needs to be made, and that is not a rule-conformity decision.  For the Rule of Law “fundamentalist” this is something of a scandal, and I will return to the problem in due course.  I note here only that if or where the rules are in conflict it is not possible to simply follow the rules or stick to rule-conformity questions without first resolving the conflict, and that process is a very slippery one and seems to make it impossible for the court to stick to its own “proper” question.           

         “Reasonable” and other weasel words

     Another problem: There is a class of “soft” words whose appearance in a law is said to introduce or even force policy considerations.   

     “Reasonable” is a familiar troublemaker.  It appears frequently in laws—as when someone is given the authority to make reasonable regulations about something.  Even when it is not explicit, it is sometimes held that all authority is granted with the implicit proviso that it be exercised reasonably, or even that a “due process” clause expresses that demand.  So it is not uncommon that a court will find itself considering whether a government agency or agent has violated the law by exercising its authority “unreasonably.”  The reason I call this troublesome is that, as we know all too well, “reasonable” is confused with “wise,” and when that is done, a policy question has been substituted for a rule-conformity question.

     For example, a school board enacts a rule (a reasonable rule, it thinks) that a student cannot play basketball if he is failing a single required course.  We can imagine the argument that had taken place—the need to emphasize academic work, to improve classroom performance, to come down strongly or not so strongly, etc.  There are harder and softer options, but in the end, the advocates of the one-course rule prevail.  A student fails, is kept off the team, goes to court.  “Has the Board acted reasonably?” is the question.  The judge may well think it too severe a rule, as likely to increase the drop-out rate, as ruinous to the hopes of the fans, and he may even be right in supposing that on the whole it would be better if the Board were to adopt a more lenient rule.  Had he been on the board he would have voted against the one-course rule.  Nevertheless, it would be ludicrous if, as a judge, he ruled against the board.

     “Did the board act reasonably?” is the question.  Not, “Did the board do what I think it should have done?”  Not, “Did the board act wisely?”  The Supreme Court has repeated to the edge of weariness that the court is not to substitute its wisdom for the judgment of the responsible agency, but the temptation seems irresistible.  On questions about which reasonable people can disagree some will be wiser than others, but the court is not to declare that the reasonable view it may think unwise or less wise is, therefore, unreasonable.  We are, I assume, beyond the familiar urchin view that “reasonable is what I think should be done.”

     Besides “reasonable,” there are words like “proper” or “due” or “appropriate” or “fair” whose appearance in a law seems to be an invitation to a judge to take a hand in the making or correcting of policy.  In all such cases it is to be remembered that the court is not being asked to decide what to do about a problem but to judge whether someone with primary responsibility for the matter—a legislature or an administrator—has exceeded the limits loosely suggested by these troublesome words, which signal a delegation of responsibility more than clearly defined guidance.  So, one can acknowledge that the board has acted reasonably even though one would have acted differently.  But admittedly, there is a very hazy zone that cannot be conjured away, and I do not intend to try to conjure it away.  “Is this reasonable enough?” can seem close to “Is this good enough?” and to the extent that this is so it will trouble anti-activists that words like reasonable are scattered throughout the law and constitute a standing invitation to judicial policy-making.

     The attempt to eliminate such words, to tighten things up, to get rid of ambiguities is understandable but generally misguided.  Ambiguity has important functions.  It is not always a vice, just as clarity is not always a virtue. The point about these expressions is that they are ways of postponing decisions that probably should be postponed, of delegating decision-making scope to others more likely to make informed judgments.  To say “make reasonable rules…” relieves you of the hopeless task of trying to spell it all out here and now. We may have to abandon the view that since we now know it all we had better make sure that everything is clear and nailed down tight because from this high point it’s all downhill.  But it is wiser, I am sure, to continue to make use of ambiguity, even though it may create some problems for rule-of-law fundamentalists.

The problem of new instances

     But the difficulties created for the strict opponents of judicial policy-making by apparent contradictions and by words like “reasonable” are nothing compared to the disasters that open unexpectedly at their feet in the most ordinary of situations, unmarked by paradox or contradiction, undistinguished by the use of mushy language or by flaws in expression.  In the normal course of events it is necessary to decide whether or not something falls under a rule.  A law governs the taxation of religion; is this a religion?  Is a skateboard a “vehicle”?  Is delivering a message engaging in transportation?  Is a live-in same-sex lover a “spouse”?  Is a fetus a “person”?  There is no need to multiply examples.  In a changing world we are always being faced with the question of whether something falls under a rule, is another one of “those,” to be treated as other things of the same sort.  (The morning news reports that the Court is going to have to decide if a “cordless” phone is a telephone.  With no wires to tap, is it O.K. to tune in?)  We may think we know what a “spouse” is and have provided, by law, that an employee’s spouse is covered by the medical plan.  But is Smith’s live-in same-sex lover, Jones, a spouse?

     I will linger over this example.  The commonsense, naive view would be that a court is to discover the meaning of “spouse” and then see if Jones is one of them. The same common sense would consider it a scandal if the court were first to decide whether Jones should receive the benefits and then adjust the meaning of “spouse” to get the desired policy result, the very model of the crime of activism. So we will, in a quick search for the meaning of “spouse,” round up the usual suspects.  We want to see if the court can succeed in doing what, on the common sense view, it is supposed to do.

                            

Definition

     A handy definition would seem to settle the matter, except that, as in most cases, the word is not given an authoritative definition.  Or if it is, it might still leave the matter unsettled—as might a dictionary.  The one on my desk says “a marriage partner; a husband or wife.”  But what is “marriage,” or “wife,” for that matter?  We are unlikely to have a definitive specification of “spouse” (or religion, or commerce, or telephone, or speech…)—neither an exhaustive list of the instances of “spouse” nor a specification of the properties that, possessed by anything, determine its membership in the class in question.  We may regret this and resolve that all terms in the law should be carefully defined, but it will turn out that for a variety of reasons this cannot always be done and, in spite of the plausibility of that familiar battle-cry “Define your terms!”, should not be seriously attempted, for much the same reasons as I mentioned regarding the desire to get rid of ambiguous words like “reasonable.”  We would be foolishly forcing ourselves to make decisions we are not prepared to make, are better off not making.

Usage or precedent

     Let us suppose that no dispute-settling definition is at hand, so we turn for help to customary legal usage.  For example, consider “religion,” protected in the Constitution, and not defined. Some time ago it was held to cover religions A and B and C. Then something else raised its head. “No need to define ‘religion,’” says the Court, “we can tell that this is another one”—adding D. So now we come to this new candidate, E.  No definition of religion; not on the list.  But is this another religion?  Is E like A, B, C, D?  How much like? Essentially like? Sufficiently like? How like and in what respects it has to be like in order to be another one is a question that presents difficulties.  Does E “belong” with A, B, C, D, and therefore…?  Or “should we treat E the way we treat A, B, C, D?”  Are these the same question? Are we to discover whether or not E “belongs” with A, B, C, D, is really another one?  Or are we to decide whether or not to treat E as we have treated A, B, C, D?

     “Spouse.”  Some time ago the law provided benefits for the wife of a public employee. Then, as more women took jobs, and after a big fight (What! Benefits for idle husbands!) “wife” was changed to “spouse,” so that both husbands and wives were covered.  And now?  Same-sex live-in lovers?  It is, of course, somewhat unprecedented, but the current range of precedents does not settle whether the rule extends to this situation since, as we know, the reach of a general term is not limited to already acknowledged instances. So, we are to consider whether the live-in same-sex lover is a “spouse.” 

     We compare homosexual couples with heterosexual married couples and discover that they are similar in a number of respects. They differ, in this case, in not being of different sexes.  But is that difference crucial?  Is it essential to being a “spouse” that you be one of a heterosexual pair?  The answer, oddly enough, is not altogether obvious, any more than the question of whether a belief in a universe-creating God is a necessary or defining trait of “religion.”  “Is it really a spouse?” and “Is it really a religion?” are questions that take you into the thickets of “classification”—real, conventional, merely convenient for this or that purpose, or somehow carving the world at the right joints. Can you discover, as a fact about the world, that the homosexual lover is not really a spouse?  Or, in the end, must you decide whether to treat him or her as one, for reasons that are not discoveries about meaning but considerations of policy?

     In some cases there is something I call “intuitive strain.”  If we navigate more or less easily from Catholic to Protestant to Jew to Mormon (not a religion but a criminal conspiracy, a Supreme Court judge once said) to Unitarian and Humanist, including them, in turn, under  “religion” whose freedom is protected by the First Amendment, we might feel that proposing Communism as a “religion” is a strain on one’s hospitality.  Or: You are in the “delivery” business if you deliver furniture. If you deliver books? Yes.  Letters? Well, ok.  Letters by Fax?  Messages? By telephone?  Speeches, by television?  No, wait, don’t be silly. “Delivery” just means…  So, “spouse” may grow from wife, to husband or wife, to common law partners (“unmarried” but with three children) … to same-sex partners?  Admittedly, some strain, although we may feel the strain at different points. 

     “Intuitive strain” (or “stretch”) is a sloppy but not insignificant notion, full of difficulties that may prevent its being taken seriously. Its appearance here—how much “stretch” is tolerable?—is an occasion for taking up the question of “strict” construction and the contrast between strict and liberal (or lax or broad) construction or interpretation.  In deciding whether to admit a new candidate to the class in question, we do so on the basis of “similarity.”  How similar? And similar in what respects?  “Strictness” is the tendency to be grudging about new admissions, to insist on greater similarity, especially with respect to any characteristic that, if dropped, generates intuitive strain.  Thus, the strict constructionist might insist that heterosexuality is an indispensable condition of spousehood.  A liberal or broad constructionist might say that the couple is committed, caring, economically intertwined, etc.—in many respects like ordinary married couples—and that a spouse is really like a partner or a friend and need not be one of a heterosexual pair, that “sexual orientation” is not a necessary, definitive trait of “spousehood.”  What is at stake here is inclusive generosity.  A strict constructionist would be against stretching “spouse” to include Jones; a broad or lax constructionist might not be. Put crudely, “strict” demands more similarity; “liberal” is satisfied with less. The latter is more tolerant of “stretch.” (Put otherwise, “strict” tends to increase the number of necessary or defining conditions, “broad” or “liberal” to decrease them.)

     The question is, are we to consider strict construction correct and, therefore, required by the Rule of Law?  Alas! It is not that simple.  Strict and broad are merely two modes of interpretation and neither can claim “correctness” across the board. My liberal friend is a broad constructionist when it comes to interpreting “freedom of speech” in the First Amendment to include not only “speaking,” but picketing, or marching, or sitting down in the middle of the street, or burning the flag—almost anything as long as it is a way of “saying” something. On the other hand, suggest that “treason” should be stretched a bit to include not only giving aid and comfort to the enemy in time of war but also sticking up for the potential enemy in a period of cold war—and he will rush to embrace some rule about how criminal laws should be strictly construed, not stretched.  We really need three notions: broad, strict, and correct; and unfortunately, neither of the first two is always correct. So, although some opponents of activism seem to think “strict” is always right, it is only a mode of reading, and a commitment to a particular mode of reading is neither inherent in the idea of the rule of law nor explicitly mandated by the constitution.  If, however, you don’t want to accept the idea that there is a “correct interpretation” you are left with merely competing modes of interpretation.  And either you choose the one that gives you the result you want (“strict” if you want to exclude Jones, “broad” if you want to include him)—the activist scandal—or you apply some general theory about the use of modes of interpretation regardless of results.  That might seem a rather pedantic, and even questionable, gamble.  In the end, instead of an across-the-board commitment to a single mode of interpretation we will find ourselves mapping the conditions under which the use of one or another interpretive mode is called for—a policy-driven venture in political theory.

     The upshot is that the determination of the crucial defining character (of telephone or religion or spouse…) is not a simple matter of cognitive inspection. The problem arises over the question of subsuming a candidate-instance under a rule. Grudgingness or strictness is not always correct, and the temptation is to substitute the question “Should we decide that Jones is a spouse?” (clearly a policy question) for the subtly different question, “Is Jones really a spouse?”  This temptation is almost irresistible in view of the complexity of the latter alternative, and we can see why what I have called the commonsense or naive expectation—that first we find out what “spouse” means and then we decide whether Jones is one of them—is likely to be deeply disappointed.  Sooner or later, the lurking policy question pounces and “Is X in accord with the rule?” gives way to “Is alternative A better than alternative B?”  This situation arises with different degrees of difficulty whenever the candidate at the door seeks admission to the club.

    

Intent

     But you must be impatient for me to get to this point.  Surely, what “spouse” means, whether it includes the live-in same-sex lover, is a question of what the law-maker had in mind. Behind the law lies the intent of the lawmaker, and if there is some ambiguity about what the law means are we not to try to find out what was intended?  Is not the correct interpretation the one that expresses the intent of the law-maker?  This seems so plausible, so obvious, so undeniable, that it will not come as a surprise that the search for legislative intent has been a major legal industry. Our question here is whether we can find in intent or in “original intent” an alternative to judicial policy-making, as when, avoiding questions of strict or liberal construction, we can correctly answer the question “Is Jones a spouse?” by looking for the intent of the lawmaker or legislature or Board of Supervisors.

     The answer is “no,” or at least “not entirely.”  I put aside  some very intriguing arguments to the effect that “intent” is, in principle, altogether irrelevant, that the law should be taken as a public artifact severed from any special connection with its maker, that it should make sense on its own, standing on its own feet, unmarked by the traumas of its creation.  However it was created you are not to try to seek out the (often elusive) creator and demand elucidation as if the creator meant more than it said.  You have the words; make what you can of them; and if in doubt, don’t look back but interpret them so as to further the good as you see it. “It is what is said that counts, not what is intended,” said a well-known Judge. (Unfortunately, the same Judge can be found to say “What he had in mind is what counts, not what he said.”  Legal maxims often come in pairs; pick the one you need!)

     I will not pursue this line, although it is powerful enough to merit more than casual dismissal. And, of course, it lends support to the activist view that consequences are what count.  It may be a tempting short-cut, and I do not really dismiss it.  But I am going to try to show that even if we consider that we should eke out the law with the intent of the lawmaker there are discouraging complications that will keep the anti-activist from finding happiness or salvation in “intent” or even “original intent.”

1) The mystery of multi-person statements.

      It is one thing, when we think of law as the command of the Sovereign, to think of James or Charles or Napoleon whose command we are to take as law. “What did he intend?” if it is not clear, is at least an intelligible question. But the law in a polity like ours is not the will or command of a single natural person.  What a legislature intended or had in mind is not the same thing as what I may have had in mind when I voted for it, nor what a canny draughtsman may have had in mind, nor what the most zealous advocate or the most grudging supporter may have had in mind. The “intent of the legislature” cannot be identified with what a particular legislator or even a group of legislators may have been thinking. (I seem to remember that a member of Parliament is not permitted to testify as to what Parliament may have intended by a law that he participated in passing.)

     While some will hold that only a particular individual or single person can be said to intend or mean something or have something in mind or be said to have a mind at all, I am inclined to flirt a bit with the un-individualistic notion of collective intention.  That is, I feel no need to apologize, or to hasten to explain away or banish ghosts from “we think” or “they thought” or “the committee intended” or “we decided.”  I find no terrible sin in saying “the legislature intended” or “the Court meant”—even though no one, including a member, is authorized to speak for the court in explaining what the court really meant beyond what it said.

     But even if one accepts the idea of legislative intent, how you would discover it is something of a mystery, and its complexity keeps the appeal to it from being an easy solution to discovering what the lawmaker or enactor of the First Amendment may have intended by “religion” or “an establishment of religion” or the enactor of the Fourteenth by “person.”  When the meaning seems clear, it is not because of “intent”; when you are driven to look for the collective intent you are not likely to find a conclusive answer. Not even a letter from Jefferson establishes that “they,” whose action is what matters, intended what he may have intended about “a wall of separation between church and state.”

2) Historical vicissitudes and “Original” intent.

     The original intent, if I may use that expression, of those who enacted the First Amendment was clearly not that no government should abridge the freedom of speech, but rather that the newly formed Federal government was not to interfere with the States, which retained the power to regulate speech as they thought best.  There is really no serious dispute about this; it even says “Congress shall make no law…”  Then, after the Civil War, came the Fourteenth Amendment, which restricts the power of the States in some respects, although it does not mention the Bill of Rights or the First Amendment. Some have argued, rather creatively, that the Fourteenth somehow “transmits” the Bill of Rights, or rather “transmutes” it into limitations on State as well as Federal powers.  In addition to the overwhelming difficulty of finding out what was “intended” by the adopters of the Fourteenth, it is almost impossible to say how such transmission nullifies or modifies the “original intent” of a Bill of Rights that certainly cannot be held to mean the same thing when it denies powers to the Federal government only, leaving those powers to the States, and when it is to be read suddenly, years later, without explicit reference, as denying power to all government. You may be able to make sense out of the situation, but it won’t be by looking for “original intent,” either of the First or of the Fourteenth Amendments.  The operative “intent” of a long-lived law is not always (to say the least) the same as the “original” intent, and the answer to what a law is now properly taken to mean is not always discovered by historical research.

     “Original Intent” seems either a redundant way of saying “intent” or, if it is a recognition that what a law means is subject to some historical battering, a way of reminding us that we do not always—a century or a decade later—interpret a law in the light of the original intention. To say that we should always stick to—or return to—the original intent is not merely to utter a conservative dogma.  It is a radical proposal of questionable merit.  Long established usage may be discovered to be a departure from original intent, but it is not at all clear that a return to the original intent should follow that discovery.  For example, the current interpretation of the “establishment of religion” clause is a wild departure from the “original intent.”  It is also probably the case that the adopters of the Fourteenth did not intend to include “corporation” under “person.” But it would take not an activist court but a hyper-active court to announce a return to the original intent in such cases. At some point, established usage, on almost any theory, displaces original intent—just as current linguistic usage displaces and need not blush before the archaic.

     Historical questions about fundamental laws should remind us how difficult questions about the intent of the “amending power” can be.  Research on these matters is usually inconclusive, and policy-driven in the bargain.  On less ancient or fundamental laws, “original intent” really shrinks to “intent,” and on the “intent” question, as I have suggested, the mystery of multi-person utterances will keep us from finding in “intent” the infallible cure for legislative ambiguity.

3) Intending the instance and intending a class.

     The context in which the Fourteenth Amendment decrees that no “person” shall be denied the equal protection of the laws clearly suggests the intention to cover black persons.  But “person” covers more than “black person,” and let us even suppose that the Amenders “intended” more.  But what more? How much more?  They intended “person,” including black person. Did they intend “persons of Japanese ancestry”? Female persons? Minor persons?  Artificial persons?  Fetal-stage persons?  Illegal alien persons?  While it is clear that more than “blacks” was intended, the attempt to settle the scope of “person” by finding out what the Amenders “had in mind” or “intended” is a hopeless enterprise. The law says “person,” presumably the class of persons; the class of “persons” includes black persons; it includes more than black persons; we don’t know how much more; and we cannot find out how much more was intended by that amorphous mass by whose action the Amendment was passed.

     Consider how you would try to discover whether “person” in the Fourteenth Amendment was intended to include “fetus,” and you will probably discover that you assess every argument about “intent” in terms of whether it is compatible with your policy view about abortion.  You will not know how to decide about the defining characteristics of “spouse” without knowing how the answer squares with your view about whether a same-sex lover should be treated as a wife or husband.  Such a simple thought-experiment, conducted in diligent privacy, will reveal the force of the activist insistence on the determinative power—and the unavoidability—of the policy question.

     This is a very short brush with the problems plaguing the attempt to apply an existing law or set of laws to a changing world. They are, as anyone who has grappled with them knows, fascinating problems, and I do not pretend to do full justice to them here.  I am trying to show how the first item of the Rule of Law Creed—that the question for the court is always of the form “Is X in accord with the law?”—seems to turn into a policy instead of a rule-conformity question at a surprising number of points, so that a court—even without policy-making ambitions—finds itself always confronted with questions of policy.  Contradictory rules, soft words like “reasonable,” claims to the status of new instances (a fetus is a “person”; a live-in same-sex lover is a “spouse”…) present questions not simply answered by the “discovery” of what the law requires, untarnished by “decisions” about what the law should be taken to mean.  Not even “strict construction” and “original intent” can steer us away from the rocks of policy to the tranquil waters of “what the law really requires.”  The very foundation of the Rule of Law creed—that judicial questions are all simply rule-conformity questions—seems, at the very least, shaky, if not absurd. We begin by asking “Is X in accord with the law?” and soon, at a number of different points, we find ourselves deciding, having to decide, whether A is better than B.

     I pause here to mention two general points.  First, this is a highly simplified account of a generally complicated legal reality.  If I had been able to defend the fundamentalist Rule of Law view successfully in these artificially simple terms, I would invite the charge that in the more complex “real” world it would be a different story and that the simple account is misleading. But if the story doesn’t hold up even in simplified form, there is no point in considering complexities that would make the outcome even more obvious.

     Second, there is the question of whether all these ambiguities or difficulties are marginal, whether they exist only at the fringes of the system or are, on the other hand, pervasive features of the legal order. The answer is that it is not some local flaw or accident that creates these opportunities for judicial policy-making.  It is, rather, the focussing of attention at a particular point for some political or social reason that makes that point seem subject to unexpected ambiguity, in need of interpretation and all the rest.  It could happen anywhere in the system, not merely at some unguarded weak spot. And this precludes a possibly attractive easy way out.  That is, we cannot really say that most of the law is unambiguous and that activism is a merely marginal option. 

II.   Correct Answers

     The second item in the simple rule of law creed—that to the proper rule-conformity questions there are correct answers—might seem so obvious that it is puzzling that I bother to list it. You either exceeded the legal speed limit or you did not, and the court is supposed to try to come up with the correct answer. Sometimes it does, sometimes it doesn’t.  The justification of procedure is that it helps, and the objection to procedure is that it impedes, the discovery of the correct answer.  In principle, in an unKafkaesque legal order, the statement that you have violated the law is true or false.  A law may be wise or foolish, fair or oppressive, morally worthy or unworthy of obedience, but, unless it is so flawed that it does not count as a law, there is a correct answer to the question of whether it has been violated.  Common sense, at whose touch so many complexities vanish, will insist on it. Rule of Law fundamentalism insists on it.  Nevertheless, even this simple clarity will become clouded.

     The force of this position is seen most easily when we consider a single rule or law disentangled from its context. Did you break a law—did you fail to stop at that red light, did you limit someone’s speech, did you not give timely notice…someone adduces a rule or a law you have broken.  You either did or didn’t.  The answer that you did is either true or false.  There is a correct answer. 

     But now things get confusing.  In spite of all the work that goes into formulating a simple “question” for the court out of the complexities of an actual dispute or conflict, we will find that the question may become “Did you violate the law?”, not “Did you violate a law?”  The distinction is between a single rule and the system of rules of which it is a part. A student may violate a school rule about not wearing armbands in class. But that rule has to accommodate itself to other rules—the First Amendment, for example—and by the time we take into account the impact of the whole system of rules brought to bear on the application of the simple rule we began with, we, or a court, may find that the student did not step outside the bounds of the legal order, is not in violation of the law after all.  He broke the rule about armbands; he did not break “the law,” taken as the system of relevant rules and operative principles, properly interpreted and reconciled.

     The question now is this:  There seems to be a correct answer to whether what the student wore violated the dress code. But is there also a correct answer to whether, taking everything into account, the student violated the law?  I do not mean a “wise” or “liberal” or “conflict reducing” answer; I mean simply a correct answer, an answer that interprets all ambiguities correctly, that resolves all interpretive conflicts correctly, that does everything right from beginning to end so that we can be as confident of the answer about the law as about the prima facie violation of the rule that starts the process.

     It seems quite natural to say that the judge was correct in ruling that I was speeding.  It is not as obvious that the judge was simply “correct” in ruling that I was properly stopped by the police at a road-block check on illegal aliens. “Hard-line” or “liberal” are not merely things we might say in addition to “correct” or “incorrect”; they seem to squeeze “correct” out of the picture, to preempt the field of comment.  Awareness of the complexity involved in judging conformity to a system of rules, something like calling an act “unconstitutional,” makes us hesitate to call such a judgment correct or true or false.

     Consider our familiar example: suppose a court rules that our old friend Jones is indeed Smith’s spouse, entitled to medical coverage.  You might rejoice that it came out that way and that you can now tell others that their live-in same-sex companions are covered.  And it is true that they are now covered.  But would you really say that it is, in some serious sense, true that Jones is Smith’s spouse and that fortunately the Judge discovered it and correctly told the truth? As it is true that I did not stop at the stop sign?

     I have raised the possibility that “correctness” may not apply to complex situations, when what is at issue is conformity with a “system” of laws or being “constitutional.”  My professional legal friends are always reluctant to characterize a constitutional decision as simply correct or incorrect. Is it true that there was a right to privacy guaranteed by the Federal Constitution before the court said there was, and the court was simply correct in saying so?  Or was it really “there is from now on!”—a good example of “judicial activism” or judicial policy-making filling in for the absence of a “correct answer”?

     Apart from the question of whether the complexity of a system and the necessity of interpretation weaken the sense that a judicial decision is simply correct or incorrect, there is the familiar situation, discussed earlier, in which a rule-conformity question has been displaced by a balancing problem, a problem of deciding whether one alternative is better than another.

     Is there, in the end, a correct answer to a balancing question or to questions like whether capital punishment or abortion or the exclusion of religion from public education are good things?  We enter into the hazardous domain of “value,” a domain in which, we are told, it is archaically elitist to suggest that anyone is really right or really wrong, or more right or wrong than anyone else, in which the claim that your view is the correct one is dismissed as an expression of dogmatic intolerance.  I am not going to argue the matter here.  I will only point out that if you think there are indeed correct answers to such “value” or “moral” questions you will want judges who are properly attuned to them and you might see “moral correctness” as a qualification for appointment, displacing “moral neutrality.”  And if you do not think there is an objective value or moral “rightness,” and if you think that judges must necessarily deal with such matters, you may consider it important that judges are appointed who at least share your own views.

     It is here, baffled in the search for the clearly “correct” answer and disenchanted with judicial answers that seem thinly disguised partisan political pronouncements, that we sometimes encounter the well-merited and oddly comforting characterization of judges or decisions as “statesmanlike” or “judicious” or even “wise.”  Such characterizations are hard to analyze, but they suggest neither narrow, legalistic correctness nor mere political partisanship. “Judicial statesmanship” must certainly be compatible with the conception of the Rule of Law.  But it seems to offer us “wise” answers instead of “correct” answers. It may be difficult to object to that substitution, but the hope for, the dependence upon, judicial wisdom is not without its threat to the mundane conception of the Rule of Law.

III.  Proper Judges    

     I now come to the third item of the simple Rule of Law creed: It is possible to find judges competent to give the correct answers to the proper questions.

     What is required is a practitioner who is not swamped by the partisanship that is built into the conception of the lawyer as a hired gun in an adversary system. I hold rather doggedly to the conviction that, in spite of protestations, our law schools are better at training lawyers than at preparing judges for their functions.  But in any case it is clear that a judge is not supposed to be an advocate for a client, preparing briefs and arguments—“opinions”—for a partisan position. He is supposed to be—and here we have trouble describing it—neutral or non-partisan or impartial, or unbiased, or objective, or whatever we settle on as fitting for the agent of the rule of law in the midst of a distracting gaggle of partisan advocacy.

     So we can expect to hear, and will not be disappointed in the expectation, the annually rediscovered shocking insight that  all men and women are human, prone to error, to bias, to subjectivity, to neurosis, and therefore, that to be a Judge is impossible. In its more pretentious form this insight is decked out with tattered philosophical and sociological fragments—the mind forever shut out from the world of things-in-themselves, warped into merely human categories, culture-conditioned, linguistically blinkered, sex and class distorted, ego-centric. In the ordinary world we occasionally hear the cry “kill the umpire!” but in that world we are seldom told that, since we are human, baseball is impossible or that umpires should be home-team activists—or are, in fact, if they would only admit it.  But in the more richly imaginative academic world …Judges? Objective? Who? Whom?  We are all partisans… the familiar half-baked academic enlightenment that, in students, we call sophomoric.  I am not going to take this position seriously.  Whatever we are—including human and ego-centric and all that—we are capable of being scientists and doctors and referees and umpires and even judges.  The Rule of Law is the kind of game that can be played with the kind of people we are—properly selected, properly educated, properly encouraged.  If we fail here it is not because we are asking humans to behave like angels.

     The real difficulty is with the theory of judicial obligation, with the delineation of the judicial task, with the mastery of the interpretive art from the point of view not of the lawyer but of the judge.  There is a radical difference between the perspective of the lawyer and the perspective of the judge.  The lawyer is someone with a client; the judge is someone with a problem.  But I remember a newspaper account of an interview with a newly appointed appellate court Judge.  “I’ve been a civil liberties lawyer,” he said, “and I will go on being a civil liberties lawyer.” No one seemed to notice the moral and intellectual absurdity of that remark.

     There may be judges who have an inadequate understanding of their function. There may be an occasional rogue judge who indulges himself in the arbitrary exercise of his discretion.  But more common, and more troublesome I think, is the judge who shares a pervasive utilitarian bent, who believes that one is always to do, in any particular case, whatever he thinks will promote the greatest good and that, in the end, one is to bend even the apparent requirements of the law to the advancement of the “good” as one—who else?—sees it.  So even if there is what I have called a “correct” answer, the judge, as a moral activist, should, it is said, provide a better answer. He should be “result oriented,” not neurotically fixed on mere correctness.

     The “proper judge” called for by the simple rule of law creed is not an impossible dream.  But everything does turn on his education (Law School education is not a cheering spectacle) and on the judicial theory with which he approaches the inevitable discretion that any complex system of rules imposes upon its administrator or guardian.

 ***********************

                                 

     I began with a delineation of the basic Rule of Law creed,  according to which the court is supposed to deal only with questions of rule-conformity, pronouncing correctly on the legality or constitutionality or legitimacy of actions, speaking through a special class of persons trained and competent to practice the interpretive arts that serve the administration of the rule of law, restrained by self-discipline from the temptations of policy making, leaving that to those whose proper political function it is.

     In short order the painful inadequacy of the creed became apparent, essentially because of the irrepressible ubiquity of the policy or balancing question.  The naive “rule of law” image of the judicial umpire gives way to the more realistic one of a rhetorically constrained political agent making policy decisions.

     But if, in this conflict, the activist view seems to carry the day, it does so at a price.

     First, by dissolving rule-conformity questions into policy questions it strips the mantle of “legitimacy” from the political process.  “Constitutionality” itself is the concept of rule-conformity writ large, and it lies at the basis of American governmental legitimacy.  The President was able to say, as he reluctantly prepared to enforce a school desegregation order upon a recalcitrant southern state, that “constitutionality” overrode judgments of policy and that he was pledged to defend a constitution that was something more than the mere policy views of nine non-elected judges.  As a society we are disposed to play by the rules even if it means we may lose a game. But not if we think the umpire is making policy decisions in the guise of calling strikes.  Respect for the Court and acknowledgment of the need to abide by its decisions—a social habit of immense utility—rests on the naive view of its function as the guarantor of legitimacy, as the enforcer of the Rule of Law.

     Second, the displacement of rule-conformity by policy questions strips judicial review of its compatibility with democracy. There is nothing undemocratic about a court, even an unelected court, enforcing the rules of the constitution against violations by elected legislatures and presidents—any more than it is undemocratic for police to enforce traffic laws against citizen-drivers.  It is a different story if judicial review permits the court to have the last word not on rule-conformity but on policy questions.  If the activist view is right, the acceptance of judicial review by a democratic polity rests upon ignorance of what the court is really doing.   Even some activist judges, aware of this difficulty, go to a lot of trouble to maintain publicly that they are not making policy. The plea for judicial “independence,” on the activist view, is a disguised plea for “undemocratic” judicial supremacy.

     Third, the activist view strips away the notion that judges are to be chosen—and respected— for some “professional” judicial character, for their objectivity or neutrality, regardless of their underlying political inclination.  We are, perhaps, increasingly open in our concern about a judge’s politics, but there is still a feeling that we really shouldn’t be, that it is important to have a disinterested referee, not a committed liberal or conservative warrior. On the activist view good politics may outweigh mere technical competence.  But it is a testimony to the strength of the Rule of Law position that we are deeply reluctant to strip the court of the aura of objectivity and policy neutrality upon which its unique position depends.

     Fourth, the argument that we should submit to the jurisdiction and judgment of International Tribunals or World Courts loses, on the activist view, whatever force it might enjoy under the fundamentalist view.  Why put ourselves at the mercy of political judgment by foreign judges whose political balancing comes pompously disguised as objective pronouncements about what “The Law,” or International Law, or a Bill of Human Rights requires?

     The difficulty is that respect for the Rule of Law, for the pronouncements of Courts, seems to rest upon a popular misconception about the whole business. What I have called the fundamentalist or the naive view is, more or less, the general view.  Insiders generally may not share that view, but may well think it a good thing that the public has the standard illusions.  Lawyers are, after all, the clergy, the beneficiaries, of this particular religion, and hesitate to de-mythologize aloud on their own territory.  I have even heard lawyers—who can explain with great zest how judges in their solemn robes are really “politicians in drag”—declare publicly, without blushing, that the glory of our system is our non-political Rule of Law presided over by a Judiciary whose independence must be protected against the intrusion of politics.

      De-mythologizing may be great fun, but it is not without cost. An unabashed Activism must eventually pay the full price  for the abandonment of the fundamentalist Rule of Law conception. It would, if understood, undermine the respect upon which the constitutional system and the Rule of Law depend.  That respect can be protected or restored only as Activism retreats from its own fundamentalist or unrestrained policy-making view.  And that is why the story cannot end here but must move, as I will now suggest, towards a theory of selective intervention.

    

  **************

  PART TWO

     The Fundamentalist view of the Rule of Law cannot survive the activist challenge; its basic propositions are vulnerable to skeptical attack; but rejecting the view exacts a price we are reluctant to pay.  In this situation, unwilling to continue to argue for the truth of the fundamentalist view and unwilling to accept the price of abandoning the traditional ceremonial empowerment of the judicial order, we may take a cue from what happens in the life of religion.  We discover that the fate of religion does not depend on the fate of fundamentalism. It does not depend on the truth of the assertion that Mrs. Lot was turned into a pillar of salt.  It may be better served by a non-fundamentalist reading: that if you insist on looking back in longing to the pleasures of your dubious past you may become, like Mrs. Lot, monumentally bitter—a more profound insight and, no doubt, what the inspired creator of the parable really intended.  So I will now turn from the hopeless defense of the fundamentalist position to the delineation of a more complex conception of the Rule of Law.

     “Why bother?” you may well ask. Why not, since I am treating this as a “religious” problem, go directly from a refutation of the fundamentalist position to an acceptance of an honest atheism, an unabashed and eager activism?  Because, I suppose, I consider an atheism based on a refutation of fundamentalism just another case of fundamentalism, the same error in another direction, the same missing of the point that, clumsily served by fundamentalism, is worthy of being better served and defended. And besides, I was bitten by Dostoyevsky when I was young, and I still worry about the Smerdyakov syndrome.  You will remember that his highly cultured half-brother, Ivan, went around saying “If there is no God, everything is permitted.”  Ivan concluded that everything was permitted, but having been raised a gentleman, continued to act more or less like a gentleman.  But Smerdyakov was not raised a gentleman, and having heard on Ivan’s authority that everything was permitted, proceeded to do what no gentleman would do.  He killed his father.  There may be some Smerdyakovs around who, having heard that the Rule of Law is a fairy tale, may conclude that all judicial bets are off and seek judicial appointment.

     To put the matter in less Dostoyevskian terms, the rejection of Rule of Law fundamentalism seems to unleash a kind of naive or crude fundamentalist Activism.  It seems to convert the discovery of the existence of policy-discretion in administering the law into a license to indulge in unabashed policy-making, to become substantially “result oriented,” restrained only by the need to avoid giving too much scandal.  It is sad that one should escape one form of fundamentalism or naivete only to fall into the embrace of another. So I will try to sketch a defense of a tenable, non-fundamentalist Rule of Law position.  My hope is to rob fundamentalist Activism of some of the fruits of its victory over its fundamentalist opponent by suggesting the constraints that a more sophisticated version of the Rule of Law position can oppose to, can defend against, an unrestrained Activism.  And, by so doing, to lessen the great price we must otherwise pay for the weakening of the conception of the Rule of Law in its Fundamentalist form.       

     The Rule of Law view, as I have said, rests heavily on the theory of separation of powers. This is seen as pre-supposing functional differentiation. Each tribunal or branch of government is to limit itself to its own kind of task, not to intrude on the function of another.  The prohibition against a Bill of Attainder, for example, is an explicit ban against the performance by a legislature of a function reserved for the judicial branch—finding someone guilty of a crime. We worry about the President’s possible invasion of the war-declaring function of Congress, as we might worry about legislative meddling in the professional conduct of the war properly under the direction of the President as commander-in-chief. It is the function of the Regents, not of the legislature, to govern the University, and the legislature may find, to its frustration, that it cannot hire or fire a professor.  This is all very familiar, and I need not parade examples.

     The justification for the separation of powers is not only the wisdom of avoiding undue concentration of power but also the fact that different kinds of tasks take different skills and are best carried out by institutions adapted to those tasks.  We may think laws best made by gregarious types who like to run for office and please constituents, not by cerebral law school graduates who are good at long written arguments and who end up in or around courts.  Different characters for different roles, different procedures, different recruitment, different training, different discipline, differently cultured—characters who will be content to play their own parts, for which they are specially or even uniquely suited, in a great division of labor.  It is in such a context that a violation  of the separation of powers, of functional differentiation, is seen as a major sin, an insult to the dignity of human interdependence, and not merely a case of jurisdictional imperialism. So, just as a legislature is not to try someone for breaking a law, a court is not to make, modify, amend, or improve a law.

     But merely to state this is to make us aware of how much the purity of this functional differentiation scheme has become sullied—legislatures conducting foreign policy, staging quasi-judicial trials or investigations; courts administering schools, prisons, employment policy; administrative agencies making masses of rules indistinguishable from laws and conducting trials almost as if they were courts.  We may still classify tribunals in traditional ways, but functional purity has long been lost. Messy, perhaps, and even sad; but too late to weep over.

     However, there is a sibling notion, very American, and very much alive—our familiar “checks and balances.”  It shares with “separation of powers” a hostility to the concentration of political power and to its untrammeled exercise.  But it has no commitment to functional distinctiveness.  In this respect it is like “distribution of powers,” referring to the allocation or parcelling out of power among a myriad of jurisdictions—federal, state, local—without functional concern. “Checks and balances” seems, in fact, to suggest a general overlapping or duplication of function. The same demand is made over and over again in different contexts, before different tribunals.  Everyone seeks the most accessible or sympathetic forum, altering the argument as required but pushing the same point. If the school board doesn’t give you what you want, you try to get it from a court or a legislature.  Social policy is what emerges gradually and cumulatively from a gauntlet of decision-making tribunals, each modifying the other, none completely dominating the scene.  The courts are said to be part of this complex system, concerned, as is every part, with “policy,” required by tradition and some special circumstances to put their arguments in a judicial rhetorical form.  They work under special institutional constraints, but they are part of the political process, not apart from it, part of the process by which tribunals check and balance and offset each other.

     It might have seemed strange, not too long ago, to include courts in the political hurly-burly.  But things have changed.  Readers of classical detective stories will remember lawyers as stodgy conservative types, bright enough to read wills to crestfallen family gatherings, honest, upright, seldom inclined to take liberties with tradition.  They did not go into the practice of law in order to reform the world by an imaginative employment of judicial tactics, did not see the court as an instrument of change or as a battlefield in a war against the established order.  But now!  Hordes of lawyers debouch from the portals of law schools, flooding the country, dedicated to causes, to change.  They are not heading to courts in order to be referees, above the struggle over policy.  They are going where the action is, where you can make a political difference without being elected or climbing a long bureaucratic ladder.  They are heading for one of the places where the law, if not made, is at least changed or reshaped, to where the benighted legislature (also lawyer-infested) can be checked and the obdurate executive balanced—to one of our more accessible powerful political institutions in a system of checks and balances.

     The upshot is that “checks and balances” may capture the spirit of our situation more accurately than does “separation of powers.”  To the extent that it does, “functional distinctness,” the conception of a unique judicial function tied to a special class of judicial questions, fades into the background. But it does not altogether bow out of the picture.  It still has a significant role to play.

     We should remember that we are, after all, talking about courts and that courts are rooted in the great traditional task of deciding whether someone has broken a law or committed a crime, that the trial that looms so large in the popular mind is still something that courts are engaged with, day in day out, and that, in that storied world, functional uniqueness is most clearly demonstrated. Here, the fundamentalist Rule of Law view is not to be dismissed simply because of complications that may arise at the more problematic appellate level.

     The persistent strength of the fundamentalist model is rooted in its simple appropriateness at this working level. And it is because of this appropriateness that the court gets, almost uniquely among political institutions, the great moral authority, the moral capital, it carries with it and spends when it starts to play more complex games. We start with the image of the impartial judge or referee and continue to respect and defer to him as a referee even after he has become, without changing uniform, a player on the field. But before that, we expect from the court or judge reliable predictability, an absence of judicial surprises; policy novelty, originality, is out of place; imagination an intruder.  In this world we should expect that the claim that the law requires that a same-sex lover be treated as a “spouse” would be received with  “Quit your kidding.  If you want that, get the city council to change the law.”

     It is beyond this level that the temptations of policy-making flaunt themselves.  We move into the world in which the functional differentiation of “separation of powers” yields place to checks and balances.  Without distinctive functions, the injunction “Do not overstep your function!” loses its point. In the world of check and balance what is there to do but check and balance? What is left of the old sense of limited role?

     To sum this up in terms that I have used earlier: within the separation of powers framework, the unique function of the court is to confine itself to questions of the form “Is X in accord with the Law?”  In the check and balance framework, questions are normally of the “Is A better than B?” form, that is, questions of “policy.”  The necessary involvement of the court with policy questions means, as I have argued, the defeat of the fundamentalist Rule of Law unique-function view and the movement into judicial “balancing” in a world of checks and balances. And the question now is how the courts should behave in that world.

     At this point we encounter the familiar notion of “judicial self-restraint.” It is frequently coupled with “strict construction” and “original intent” in the anti-activist litany, but it introduces an interesting new note. We move from language-centered notions (meaning, intent, construction, interpretation…) to concern with the attitude—psychological or political-theoretical—of the judge toward his function, and it is precisely here that the serious non-fundamentalist Rule of Law position must establish itself. In the end, and I stress this point, the Rule of Law  must be based not on the theory of language but on political theory.  

      We begin with “self-restraint,” a notion that is almost gratuitous when the court is acting within the “separation of powers” framework. There, it is a reminder to stick to the proper question, to its own non-policy function. And also, perhaps, a warning against foolish zeal, against losing a sense of  de minimis, against pushing things to absurdity. As when a court finds that the equal protection clause of the Fourteenth Amendment prevents the holding of a father-son or father-daughter banquet, or that the First Amendment prohibits a moment of silence at the start of the school day.  Absurdity always lurks in the vicinity of a foolish consistency, and we are not required to be absurd.  Urging self-restraint against that might, if it is sadly necessary, make sense.

     But we cannot seriously accept that in a “balancing” world the court is to take “self-restraint” as meaning that it is to turn a deaf ear to the pervasive invitation to join in the game of policy-formation.  Such judicial minimalism would reduce the court, as an institution, to triviality, and no one, I think, who has any conception of the relation of our courts to our other political institutions would seriously defend such a position.  Self-restraint remains an important notion, but it needs to be supplemented or even supplanted.  What we need is a guide to the exercise of restraint, a consideration of the circumstances that require or justify judicial policy-making.  What we need, in short, is a well-considered Theory of Selective Intervention. I will not try to develop such a theory here, but I will present a small sample of the sort of things it should deal with:

     1.  The court is about the only institution in a position to deal with jurisdictional conflicts between state and federal and between administrative and legislative tribunals or agencies.  Can a president or governor refuse to spend what a legislature has appropriated?  Can anyone order a legislature to raise taxes?  Can the department of labor order a university to hire someone?  Should a court extend spousal coverage to a same-sex lover? Questions of this sort come up all the time; they are policy questions.  The court is appealed to.  Can it refuse to decide who should decide what?

     2.  The court is expected to exercise more policy responsibility in matters closely related to the justice system—adequacy of process, trials, evidence, punishment, jails, etc.—than in other areas.  Should it not?

     3.  Should the court try to remedy what might seem to be a structural failure?  Redistricting is called for, but legislatures are supposed to do it themselves, or to themselves.  Naturally, they don’t.  The court finally decides, on policy grounds, to find a way to put an end to a sort of scandal that no one else could deal with. Improper activism?  Wise intervention?  I distinguish such structural difficulties from situations in which the “barbaric” state of public opinion is deemed to be an obstacle to enlightened social policy.  If the people are too backward to abolish capital punishment, should the court do it?  I do not consider public opinion a structural obstacle, but some might.  Should the court?

     4.  Should the court do the country a service by taking on questions that seem to be too difficult for elected politicians to handle?  Sometimes the whole country seems to turn, as with a single neck, to the court, imploring it to handle matters like abortion or affirmative action that no one else wants to get burned by.  Is there a “hot potato” function?

     5.  More respectable perhaps is the assumption by the court of the role of the Roman Tribune, the special guardian of the weak, the oppressed, the minority, the group unable to work the political process to its own salvation. Is it to act as an additional check against the powerful, to tip the balance, sometimes, in favor of helpless virtue?

     6.  The court is sometimes thought to have a policy role in bringing the law and constitution “up to date.”  The constitution is two centuries old and, it is said, if it is to be a living document and not an irrelevant antiquated charter it must be reinterpreted, adjusted to modern times. This point bears chiefly on “constitutional” provisions that can only be formally changed by the cumbersome process of amendment, and since that is really impractical it may be said to be up to the court to effectively modernize the constitution by interpretation. This sounds reasonable, but it is not without problems.  A new situation does not always require changing old rules.  The ancient “Do not lie!” can be held, without change, to cover “Do not lie on the telephone!”  “Do not Xerox without permission” does not put a strain on “Don’t steal.”  Our ancient “freedom of speech” seems still elastic enough to cover lots of modern ground.  Deciding whether a rule covers a new instance, as discussed earlier, goes on all the time.  But beyond this sort of thing, is the court, acting on its sense of what modern America requires, to bring the constitution up to date by deciding that we can no longer afford freedom of the press, that we should ignore the right to bear arms, that we should curb abortions by proclaiming that a fetus is a person, that “color-blind” should be replaced by “proportional”?  Do we want the court to be a surrogate constitutional convention?

     7.  Finally, I list among these random items a position that even conservative anti-activism warriors might approve of.  I call it “corrective activism.”  Whenever the court has been living through a period in which it has been accused of blatant activism, and another President succeeds in re-coloring the court with judges of a different political complexion, we may not be treated immediately to the demonstration of a non-activist court at work.  First, it may be thought necessary to undo the activism of the previous court. Should the new court accept and protect the policy excesses of the late liberal or conservative court?  Of course not!  So first, a brief period of “corrective activism” to get everything back to where it was supposed to be before they messed it up.  But the agenda of corrective activism, even if you agree with it, is a program for a long haul, and a court so engaged will look like just another activist court.

     I list these familiar problems to suggest a range of questions to which “judicial self-restraint”—if that means more than “be careful”—is hardly an acceptable answer.  It is not an answer required explicitly or even implicitly by the constitution itself.  It is not required by linguistic or legal or political or moral theory.  In the world of checks and balances, out from under the limits of “separation of powers,” what is needed by the court is some practical and theoretical wisdom about “selective intervention.”  A confident sweeping “activism” would be fun, but irresponsible. A dogmatic identification of the “rule of law” with a fundamentalist or even puritanical eschewing of all judicial decision making beyond “rule-conformity” would be an unjustified form of self-denial, a form of irresponsibility.  If we really want to explain it all to those who yearn for “the rule of law and an independent judiciary,” we need to go through the traditional myth, through the demythologizing process, to end finally with an understanding of checks and balances and the theory of selective intervention. Such a theory should appreciate the relative merits of legislative, administrative and judicial institutions in terms of the intelligence and sensitivity that, in their contemporary form, they bring to the varied tasks of government.  Our judicial institutions, culminating in the Supreme Court, may well be our best hope for injecting some reflective wisdom into our public life, and there is really no canonical theory above partisanship that stands as a bar to “selective intervention.”

      “Judicial activism,” as a reproach, might re-emerge as the tendency to overstep the limits of a reasonable theory of selective intervention.  Lest this appear too bloodless an outcome, I suggest that it really preserves the point that generates the greatest indignation among critics of activism. They are likely to be more outraged that a court would presume to extend spousal rights to same-sex lovers than over that result enacted by the city council.  A legislative decision moving from non-discrimination toward “racial balance” might, even if opposed, be accepted with relative equanimity compared to the fury evoked by a court moving in that direction by a supposedly non-political exegesis of “equal protection.”  Similarly with questions like capital punishment and abortion.  We may be, in short, prepared to accept at the hand of the familiar political process what we are unwilling to accept from an activist court taking the matter out of our hands. So the cry “a court shouldn’t be doing that!” can still arise with undiminished fervor, but we can focus more clearly on “why” or “why not” rather than on questionable abstract views of the process of construction or interpretation.  An “activist,” it is worth repeating, is not simply a judge who is, unavoidably, involved in the policy world, but rather a judge who oversteps the limits of a reasonable theory of selective intervention.  The development or delineation of such a theory is, under present circumstances, among the more urgent tasks of legal or constitutional theory.

    

     Any legal system is at least two-layered.  There is a set of positive rules or laws—written laws and written constitutions—and there is a set of rules and principles about those rules, and they are usually neither explicitly nor formally enacted.  They are, nevertheless, a fundamental part of the legal order, and they are not to be waved aside as if their intrusion into the world of positive law is a result of judicial misconduct.

     The power of the court to declare a law unconstitutional is merely implicit; “separation of powers” is not mentioned in the constitution; “strict construction” and “original intent” as proposed guides to the court are not mentioned in the constitution; that the court, in interpreting a law, should try to carry out the intent of the lawmaker, is not mentioned; that the court has a special role in protecting the “federal system” is not explicit…these are examples, a small part of the context within which the constitution and the positive law exist and make sense.  

     The strongly urged view of some opponents of “activism” that the court should stick to what is explicit and avoid all dependence on what is implicit in the “spirit” of the constitution or the “Higher Law”—that view is itself an argument about an implicit part of judicial theory and does not enjoy a “preferred position” in the field.  What is implicit is often crucial.  For example, the constitution speaks of the “legislative power.”  It does not define it or tell you what it means.  It does not say that Congress has the power to “investigate,” but it certainly makes sense to say that the power to make laws implies the power to investigate the need for laws, even though not explicit.

     Again, the legislative power is, presumably, the power to make laws. Is a “law” anything the legislature chooses to enact?  Or does a “law” have certain characteristics so that only what has those characteristics—enacted or not—is a law?  Suppose there is a long tradition that a law is a reasonable act of the lawmaker aimed at the public good. Would not a court sometimes have to say “this is arbitrary and unreasonable and therefore not a valid law?”

      Readers of Paradise Lost will remember that Eve, confronted with a divine command, was led to think about it and concluded that a command to avoid knowledge of good and evil simply made no sense, was unreasonable.  “Such prohibitions bind not!” she declared, firing an opening shot in the eternal battle between the sovereign Will and the demands of Reason.  It is no longer a very original sin when a court today rejects a governmental law or act as unreasonable, even though an enemy of judicial activism might condemn the frustrating of the sovereign’s will by such an appeal to the higher law. Eve may not be celebrated as the patron saint of judicial review, but the Rule of Law must include implicit elements of the Higher Law that the Court is to incorporate in the legal order, not by a rebellious bit of “activism” but by a necessary act of judicial piety.  “The Higher Law,” implicit in any system, is a reminder that even the Sovereign Will cannot altogether escape the demands of Reason, although even as we seek to restrain the governing will within a tight structure of rules, we come to realize, even if we are devoted to the Rule of Law, that we cannot write it all down and do just what it says.

     The defense of the Rule of Law against Judicial Activism is fought in two lines of trenches.  The outer line is what I have presented as the fundamentalist position.  I have tried to show that, in the end, that position cannot be successfully defended.  The battle, however, does not end there. The fall-back position is “judicial self-restraint.” But as functional distinctness and the separation of powers give way to “checks and balances,” I have suggested that “self-restraint” must be displaced by doctrines of “selective intervention.” Conflict between the moods of activism and restraint will continue, but the argument should focus on the appropriateness of intervention in different kinds of cases rather than on positions—like “strict construction” or “original intent”—involved in the old fundamentalist view.

     I confess to being rather fond of the old fundamentalist Rule of Law view and even find myself wishing it were all true.  It is not my fault that it is not and that we must be satisfied with the consolations of thoughtful selective intervention.  But there is also something satisfying in the realization that Judicial Activism is not merely or necessarily a form of willful misbehavior but that it grows out of a shared realization of the inadequacy of Rule of Law Fundamentalism and that its proponents, if they do not succumb to the temptations of their own Fundamentalism, can join in a common development of the theory and practice of Selective Intervention. For that is the task that faces us after we stagger out of the trenches of  yesterday’s battle.               

***********************

May 20, 1991      

© Joseph Tussman   

Why Should We Study the Greeks?

Sometimes we are called upon to defend, and therefore to think about, what we have long taken for granted.  This, for me, is such an occasion. As long as I can remember, I have loved the Greeks, have read them over and over, have taught them upon the slightest excuse to students who have seldom complained. Students are, no doubt, too polite to complain, but it is possible that they actually enjoy the experience.  They may be relieved to discover that the Greek classics, as classics generally, are not spiritual or noble works to whose level they must struggle to elevate themselves for a time before sinking back into the more congenial mud baths of real life.  They find themselves regaled by stories of sex—heterosexual and homosexual—of greed, ambition, war, murder, treachery, heroism, love—discovering an ultimate source of that universal art form, the soap-opera.  All the familiar plots and the broad array of human types—golden playboys, strong ruthless women, clever plotters, stolid warriors, garrulous oldsters, young lovers, cowards, heroes, creators, thieves—they are all there—all the great roles and all the great stories.  It is entertaining enough, and even, if we are inclined to think, thought-provoking. When one is reading the Greeks the need to justify the activity does not seem to arise.  But if we ask, today, why we should read the Greeks, why they should play a central part, or at least serve as a starting point, in our college education, mere habit and mere pleasure cannot be a sufficient answer.

In our world, full of rapid change and novelty, struggling with a flood of innovations in science and technology, shaken by changes in attitudes about gender and the family, troubled by once-unnoticed forms of oppression, newly aware of our insensitivity to the dignity of ethnic minorities, frightened by our irresponsible power over biological destiny and precarious environment, bewildered by instantaneous awareness of what is going on everywhere, dazed by an enormous knowledge-explosion, threatening ourselves with disaster amidst the ruins of grandiose ideologies—in such a world it may seem oddly unreal to suggest to a responsible generation of intelligent college students that they actually turn from these urgencies to study the Greeks. There is much hand-wringing about our lack of computer-literacy and the deficiency in mathematics and science that may doom us to subservience to foreigners toiling with grim energy to undermine the once-envied standard of living seen as the proper consolation for our fading spiritual supremacy.  But among all the complaints about our educational failings few tears are shed over our dust-covered neglect of Homer or Plato or Thucydides.  Why, under our circumstances, should we bother to study the Greeks?  That is not an easy question to answer.

First, let me put aside or dismiss some perfectly good answers as not really good enough. The Greeks are interesting, but the fact that you find something interesting is not always a sufficient reason for doing it, especially when there may be something more important that needs doing.  There are many Great Books, but that a book is great may not be a good enough reason for reading it at a particular moment in one’s life.  So I put aside the facts of interest or high quality as true enough but as not a sufficient reason to give them an important place in modern education.

I also put aside, with some hesitation, the rather odd fact that the ruling class of the small island that presided over an Empire upon which the sun never set—that ruling class was raised on, fed its mind on, the Greek classics.  Rulers, as we know, are not fitted for their tasks by the study of science, or even social science.  They are seldom masters of the cognitive arts, of research or scholarship.  A ruler, a governor, a college president is always surrounded by people who know more than he or she about almost anything.  If we knew that someone was destined to be President or Prime Minister or King, it would be a waste of time to try to teach her physics or chemistry or even sociology or psychology.  She should be raised on great literature—stories, histories, tragedies, epics, beginning especially with the Greeks.  But this odd point—the special bearing of the classics and the Greeks upon the ruling function—is a point I will not linger over here—even though in a democratic age, when everyone is supposed to be educated for the ruling role, it is a point with special force.  It raises the question of the difference between discovering general principles and uncovering particular plots and the proper place of each, of science and the humanities, in our education—a question I will not pursue here.

I also put aside, as not to be taken seriously, a related point having to do with “cultural literacy” or with the certification of upper-class status evidenced by the nod of recognition upon hearing the names of classical authors—the familiar nodding acquaintance with the classics.

So why?

College education is, for most of us, the last formal or official chance to deal with the two great questions that will plague us all our lives.  Those questions are: (first) What am I supposed to do?  and (second) What is going on?  (What is it all about?)

What am I supposed to do?  is, of course, the great vocational question. What am I to do with my life, what is to be my task, to what am I to devote myself, what is to be my job? This is, I think, the dominant question for all of us.  Have I a vocation, a task that will absorb my energies, develop my talents, provide me with a lifetime of satisfying and useful work so that in the end I earn the great accolade “well done thou good and faithful servant!”?  Are we to devote our lives to the great struggle for justice?  Are we to try to master the arts of healing, to prolong life and banish pain?  Are we to learn to turn stones into bread, transform swamps and deserts into gardens, preserve forests and animals and fresh air?  Are we to entertain or teach and enlighten?  Or are we to learn to make money so that we can do whatever we should do that money makes possible, while we make up our minds?  I need not labor the urgency of the vocational question.  It is only when we fail in this quest for vocation, when we remain or become physically or spiritually unemployed, that we must reconcile ourselves to the bitter life of the mere consumer, and one of the aims of education is to help us avoid that fate.  Each of us must endure some version of the vocational crisis, presented Biblically as a young man’s temptations in the desert.  What am I to do with my life?

But the other question is equally urgent and basic.  If it is important to find the part you are to play, to find a part to play, to discover your vocation, it is essential to know what the game is.  Finding a role, playing a part, is playing a part in an ongoing story, and not to grasp that is to go through the motions without understanding what you are doing, to live without a sense of significance, to go through life as a sleep-walker.  The anguished quest for a glimpse of the great scenario—a quest with which religion is concerned—is a central part of the experience of everyone who is not content to remain a mere uncomprehending cog in a machine or a pathetic pleasure seeker.  “What is it all about?”, “what have I been born into?” is an unavoidable question, and it cannot or ought not be altogether evaded by the institutions of education—although, it must be said, they try with considerable success to evade it.

So we are looking for an answer to “what is it all about?”  That, I think, is really the same question as “what is going on?”  But “What is going on?” is a more helpful way of putting it.  It is as if each of us has been dropped by the stork on a large playing field and, as we become aware of things, we find balls flying in all directions and people running and clashing and shouting—all very confusing to an infant.  (When we are born, says Lear, “We cry that we have come to this great stage of fools.”  Delivered by the stork to the wrong place!)  The problem is to discover the point, to come to understand what is going on and even, after a while, to take part.  Things are going on all around us, something we are born in the midst of, something taking place now, something that may be part of a fairly long-running game.  So the great orienting question is, “What is going on?”  And the problem, of course, is how to find out.

To begin with, we need to remember a simple fact about time, about the past and the present, as it relates to “what is going on.”  The present is sometimes thought of as a thin razor’s edge, separating a no longer existing past from a not yet existing future.  This thin conception of the present is seriously misleading.  It is a “present” in which nothing can go on.  So, as we all come to realize, the present in which we are living and in which things really do go on is not at all like a thin razor’s edge but is in fact remarkably thick or fat. The real present is not a discrete instant but a duration—a duration long enough to make sense of something going on, long enough, even, for a long story.

Consider a simple melody.  It is, I suppose, made up of single notes, each of which sounds singly and for an instant. But if we hear only a single note, if only a single note is “present,” what happens to the melody?  Unless we hear, are aware of, the whole series we do not hear a melody at all.  But we do hear the melody, it is the melody that is present, and the “present” thickens to the duration necessary to contain the melody.  We are listening to and enjoying the melody present to us in the present.  How thick is the musical present?  It must vary.  Some can carry or be aware of or hold in mind a fairly simple tune. Some, not I alas, can have a movement of a concerto in mind, and listen not to notes or brief snatches but to a long movement as I can listen to or hear a simple melody.  When a musician listens, I suppose the whole symphony may be “present” to him.  There is no melody, no music, without a thick enduring present.

Or take another example.  For someone who doesn’t understand what he is seeing, who does not grasp what is going on, a tennis game may shrink to what his untrained mind merely “sees”—someone hits or serves a ball—an event that, taken by itself, is quite uninteresting. He may learn to follow the ball back and forth across the net for a whole point.  The complex exchanges are seen as a unit.  He may learn to see the point as a point in a game.  If he understands more, he can have the set or the match or the tournament before him.  It is the tournament that is now going on, that is present.  And it is that that makes sense out of the tactics and strategy that are invisible to one who sees only the single shot in a thin present that has no time for these things.  Many of us recently watched the Connors match at the US Open.  Consider what you had to understand, to have in mind, in order to make it more than the dull sight of two men running around, hitting the ball back and forth.  You had to be aware of the saga of an aging ill-mannered millionaire trying to redeem himself by driving himself into competition with gifted athletes half his age, winning forgiveness for two decades of boorishness by a display of amazing persistence and courage.  Unless you were aware of something like that going on—unless that whole long story was present to you—you did not see or know what was going on.

Or consider “reading”.  You do not merely read a word or a page or a chapter. A word does not have a plot.  A book does.  You read a book.  That is the unit that has the plot or the story.  In a real sense, that is what is present, what you are going through, what you are living through. The “present” to a reader is not like a spotlight going from word to word.  If someone asks what it is all about, what you are reading, you do not glance at the next word and say I am reading “dog”.  To read is to endure through the present story and you cannot do that if what is present shrinks to the single word.

These examples are to say that the present—what we are living in and through—a melody, a game, a story—is an enduring span.  What we are presently living through is a long story (what Adam, in Paradise Lost, called a “long day’s dying”), part of an even longer story, an enduring now.  And just as the musical present is thicker for one who grasps and understands music, as the game is richer for the fan who understands or grasps the series or even the wonderful present season or the story of the great series of seasons, as the book is richer for the one who grasps and is aware of more than the present page or even the present chapter—so a life is a different matter for someone who does not suffer a mere moment-to-moment existence but begins to grasp what is going on.

A life is, in a sense, a long present span of existence. For each of us there is a story—a childhood, youth, maturity, old age, a history.  For each there is a biography.  But this biography—and here I suppose I approach the point—is a chapter in a longer story, in the story of a family or a community or a polity or even a culture.  When we ask “what is it all about?” or “What is going on?” we are like chapters looking for our books, and it is only as we begin to see that that we begin to see the significance of what we are and what we do.  We are moments or episodes in a continuing series, in an enduring present, chapters in a longer book, moments looking for their explanatory and encompassing contexts.  How to discover that?  How to find the answer to “what is it all about” in the discovery of “what is going on” is one of the great tasks of education.

We, assembled here, are living through an episode in the present long-running story of Western Civilization—an episode in the history of this Island, of Canada, to be sure, but that is merely part of the story of Europe, and the earlier bits that go back to the Mediterranean and, for our purposes, to the image of humans at war on the plains of Troy.

I suppose you may be objecting to all this and suspicious of where it comes out and are thinking of how to defend the view that everything is simply a collection of points and instants and all the rest is illusion, that the past is already gone, nonexistent, unreal.  But as long as you have allowed me to go this far, let me stress that our lives are to be seen as episodes in a longer story and that that story—not itself the whole story—is the story of western civilization and that the story line runs through Greece and Rome to Europe and England to North America.  It is a complex story with plots and sub-plots, themes and sub-themes, recurring motifs, cyclical movements, and even evolutionary tendencies.  We are part of that story not because we have chosen to be but by the fact of our birth and nurture in which everything, every part of the furniture of mind and character, is at least second-hand, inherited.  To think we have made it up is simply a parochial prejudice.  We are, above all, inheritors and most of our creativity is marginally trivial. To think otherwise is the sin of pride.  The play we are enacting did not begin when we opened our eyes. We are not the authors, we have not invented ourselves.  At most we are part of the cast of characters challenged to play a role.  If we reject this view and try to pretend we start with a clean slate, it is no wonder that we will soon complain of the meaninglessness of life and have problems of identity—just as one who thinks only the present note “exists”, who insists on hearing only that, may wonder where the melody, where all the music, has gone.

And now I ask you to tolerate another stretch of imagination.  Imagine that there is a creature, a person, called Western Humanity enduring all this time, to whom all this history is biography, is happening, a sort of not-quite-immortal being, and that each generation is really only a sort of regeneration, like the growing of a new skin.  A single complex enduring great person whose life is the enormous “present”.  And now let us imagine that about every 30 years or so this person is stricken with amnesia, all memory wiped out. So we always seem to have on our hands, as part of the culture’s regenerative process, generation after generation of total amnesiacs.  The problem for the survivors who have not yet been shed, the dying, the teaching generation (Yeats’ “Those dying generations—at their song…”)—the task of those who still linger is, before they go, to restore awareness of identity.  Every new generation mutters “Where am I, who am I, what am I doing here?” as it groggily rubs its eyes and stirs into awakening.  What we can call liberal education is the art of restoring these amnesiacs to their senses.  They have to learn the language all over again, how to read and write, how to behave, and what is going on, what the game is into which they have been born with minds somehow mislaid.  How would you do it?

It is really not so far-fetched.  Every generation, every person born into this continuing cultural life knows nothing of it and the process of growing and learning, acquiring awareness of what is going on, may perhaps be more crudely described as initiation than as being brought to remember.  In either case it involves being brought to grasp the story, being clued in to what is going on.  It can take a short shallow form or a longer deeper form.  The instruments of that initiation or recollection are the great moments, the great landmarks, the great clues, the high points of achievement.  The minds that have given the culture—us—its great special shape are the Homers, the Platos, the Virgils, the Dantes, the Shakespeares, the Miltons…  It all begins for us, at least for the West, with Athens, that small town in Greece, and flows in an unbroken line from then to now, from them to us in a great living present. In this great present story, it is Socrates who dies rather than give up the freedom to question and examine.  It is Athena who invents law courts to settle great moral conflicts that otherwise lead to never-ending war, teaching us to subordinate moral indignation to judicial verdict.  It is Antigone who, in the face of that, asserts that you are to follow your moral judgment when it is in conflict with the law.  It is the story of Oedipus, that great victim of child-abuse, who ends up killing his father and marrying his mother, suggesting that if the home is dangerous to the infant the infant may grow into a menace to a normal home, etc…

It is hardly an exaggeration to say that we are still working out, moving within the framework of the great dilemmas posed for us by the Greeks, we are still singing the song they struck up, acting out the roles in the story we are still enacting.  If we do not know that, we hardly know what we are up to, what it is all about.  We remain children who have not yet been let in on the whole story, our cultural memory has not yet been restored, our initiation into the game has not yet been fully completed.  And those responsible for the cure of amnesia, for the generational regeneration will have failed in their primary responsibility.  The Greek classics are like great clues left for us to decipher.  They reveal us to ourselves.

And that, ultimately, is why we should study the Greeks. Not merely because they are great works of the human mind, not because, once we get the taste for them, they give us great pleasure. But because if you are interested in your identity that is where you get a good part of the answer.  You are the present note in the prolonged existence of western civilization.  You may not like that answer and may try to reject it.  That is, you may have an identity crisis.  But it is simply an inescapable fact. You can close your mind to it, but that will not change the fact; it will merely warp your mind.

We begin with the Greeks so you will know who you are, will begin to catch on to the game into which you have been born, will recover from the amnesiac ordeal and find your part to play in that enduring on-going story.  I stress that this is really not a matter of choice.  We are a living part of a living enduring western culture.  That is a fact about us.  You did not choose it any more than you chose your mother tongue.  To hate it is a form of self-hatred.  It is better to try to get to know it, to learn its movements and currents, to become familiar with its themes, and in the end to try to make the best of it.  If you are going to look for your roots, you have to go back to the Greeks, or more broadly speaking, to the Mediterranean that adds Jerusalem and Rome to Athens.

It is from some such considerations as I have been trying to convey that the real answer to why study the Greeks gets its force. If you learn the Greek themes, few things after that will seem altogether strange.  I remember a rather striking time in the 60’s in Berkeley.  Flower children, street people, drugs, strange wild music, disorder, rebellion, anti-establishment energy.  It seemed to be, claimed to be, something new under the sun.  But to Berkeley students in a program not unlike yours it was, intriguingly, a familiar reenactment of the Bacchae of Euripides, in which, under the inspiration of Dionysus, those outside the Olympian establishment hurled themselves against the cold rational world that seemed to have no room for the passions.  Just as other campus scenes were recognized by our students as a Halloween reenactment of Paradise Lost.  What we are living in and through is a great Theme and Variations.  And the theme is presented in Athens.

I suppose I should acknowledge that I am aware of the powerful challenges to this conception of education as helping us to discover what is going on and guiding us into taking part.  First, perhaps, is the indignant rejection of the idea that a new generation does not start with a clean slate, free to make of the brave new world what it wills, but is born with debts and commitments, with hand-me-downs, that the note is “continuation” and not beginning from scratch, that it is encumbered by the expectation of gratitude towards its generator.  I am always amused when I read Paradise Lost by a great passage in which Satan, launching a rebellion against his creator, is reminded by another angel of his great debt to the creator who, after all, made him what he is.  Where did you get that ridiculous idea? replies Satan.  Who says we were created?  As long as I can remember, I was there.  I made myself.  My generation was ungenerated…  This satanic repudiation of the debt to the creator happens every year, and there are even educators who pander to it.  I’m sure you recognize this mood of rejection of what exists, of the soiled, spoiled, sin-pervaded parental world and the determination to start all over, afresh, on a new game.  (Or at least, as Electra swears, to be better than her mother…) But, alas, the world does not present you with a clean slate.

Or, discovering that our culture is a story full of oppression, unfairness, injustice, there is the determination to turn one’s back on it and find or make another that is not oppressive of class or race or gender, and the view that, therefore, education should steer clear of the landmarks of the oppressive culture that, if we attend to them, will warp our minds and souls.  But the path of reform or regeneration leads through, not around, the mastery of the powers of our culture—through the incarnation, not the avoidance or rejection, of those powers.

Or the objection that emphasis on a particular cultural life is an explicit or implicit repudiation of the value of other strains.  There are, of course, other great cultures into which members are awakened.  It would be stupidly provincial to deny that fact.  But a deep initiation into one’s own, whatever it happens to be, is a precondition of everything.  Just as a mastery of one’s mother tongue is generally a precondition of a mastery of language, and not an assertion that one’s mother tongue is superior to other languages.

There are, I am sure, a host of other objections, but, on this occasion, I will not stop to pay my respects to them…

Significance is not a cosmic but a human notion; it is not to be found by turning away from the human drama we are born into.  We are born into a world of games, of styles, of ways of life, and it may be futile to yearn for another game, as if in that game everything will be better.  I once characterized this yearning as based on the hope that if only we had a different mother tongue all the mistakes in our language would not be made—if we spoke French or Chinese instead of English.

The conception of the human person as basically a creature of his culture, not someone standing outside of it free to take it or leave it, is strikingly expressed in one of the great parables in Plato’s Republic.  In what is usually called the Myth of the Metals, which I take in fact to be the real heart of the Republic, Plato develops what I think of as the conception of the marsupial birth of the human being.  We are born in two stages. When we emerge from the womb we are, of course, incomplete and unviable.  We are then placed in the second womb, the community, or polis, the marsupial or kangaroo pouch, in which the crucial stage of development takes place.  We are equipped with our language, habits, values—everything distinctively human—living a sort of limbo-like existence as minors—until we complete our growth, and emerge or are born as adults.  The community is, in this birth process, parental—and our fellow-sharers of that womb are siblings or fellow citizens who are to carry on the life of the community.  Note, it is not a mere handing on or transmission of a culture as if one is delivering a message.  It is a carrying on of a community’s life, in which each is to discover and play a proper part.  Thus the art of education is the art of bringing a human being to full birth, it is an obstetric art.  In that spirit, I have suggested that reading the Greeks is part of the process of bringing to birth a person fully aware of his identity.  The myth of the metals is one of the great creation parables at the heart of western culture and a clue to what education is all about.

I would like to supplement this Greek parable with our other great Mediterranean parable of the creation or growth of a human adult. Adam and Eve are seen as a young couple living not so much in a wild garden as in a model Kingdom.  There is a ruler.  There is Law.  There are subjects living in a situation they did not create, with tasks or functions.  They are told to perform their caretaking tasks and to use their best judgment.  But there is one thing they are not to do.  They are not to presume to know about good and evil, to presume to act on their own judgment against the Law.  The story as developed by Milton has Eve considering that the law makes no sense.  Why not know about good and evil, to better serve the good and to avoid evil?  The command, to avoid the fruit of the tree of knowledge of good and evil, she thought, made no sense.  What made no sense did not deserve respect, an unreasonable law need not be obeyed, and, in a fateful moment she, with Adam following her lead, disobeyed the law, putting their own judgment of good and evil above the law.  As we all know, that moment of disobedience to the command of the parental creator when that command seemed to make no sense, that constantly repeated moment, is one of the great crises in human development.  It is the moment in which, having learned to use their reason, the children turn their reason to the evaluation of the law and demand that the law make sense to them if they are to obey it.  That is, having been taught to think, the pupils, the children, think about the system within which they have been raised, which has shaped their character, subject it to criticism, and, wisely or not, decide to make up their own minds about good and evil, to follow their own moral judgment.  And at that crucial point they cease being children living in the parental garden and must go out into the world beyond the nest, a world in which they will experience pain and suffering, and carry on life as they think best in a world they did not make, with what they have learned in the garden of their childhood, and must try, after the discovery, in due course, that their children are capable of murder, to recreate a rule of law all over again.

These two great creation myths lie at the beginning of the story of which we are the present chapter.  The reminder that when we are born as adults we emerge from the second womb, the community, that has restored us to our senses by giving us our culture—our minds, our characters—and that it is only by virtue of this action by the community that we are really born at all, and that we owe a filial debt of gratitude to our real parental creators.  The popular denial of these facts of life expressed in some forms of individualism is an act both of amnesia and ingratitude.

And the reminder that, nevertheless, we must, if we are to reach childhood’s end, turn our minds, thus shaped, to a critical examination of the received law, to subject it to the ordeal of reason.

These two notes, appreciation and criticism, go hand in hand and need each other.  Appreciation without criticism perpetuates the docility of the childlike inheritor.  Criticism without appreciation will doom us to the futility of pandemonium—as Satan’s rejection of the established order resulted only in the recreation of a feeble parody of that order.  So initiation into the life of our culture requires that we do both, incarnate the powers of the culture and cultivate the habits of criticism that are themselves the habits of that culture.  Both of these begin, for us, with the Greeks, and that is why an education that does not begin with the Greeks is a bit like listening to Bach’s Goldberg Variations without listening to the great opening statement of the theme of which the variations are variations.

So among all the possible answers to “why the Greeks” I would rest on the fact that The Greek Episode—the theme of a community developing the arts of inquiry and government and self-government, moving tragically, almost irresistibly, to its own self-destruction, struggling to understand freedom and authority, law and conscience, selfishness, ambition, and selfless devotion to the common good—that this episode states with clarity and depth the great theme upon which our current life is merely a variation in the song or story of the culture within which we play out our transient turn.

Implicit in all this is perhaps a rejection of the view of Progress, or at least of progress in everything.  There are, as we know, at least two realms.  There is the realm in which we seek knowledge, the world of science and of technology, in which there is clearly improvement, so that the ancients may have little or nothing to teach us about physics or biology or geology.  But that fact of Progress in the world of knowledge does not extend to something that may be quite beyond knowledge.  Knowledge is not wisdom, and it may very well be the case that the pursuit of knowledge is not the path to wisdom—that we are not in fact wiser about life, about parents and children, about individual and community, than the ancients, than the Greek and Mediterranean generators.  Technology and other trivial things change, but it may be that on fundamental matters there is no Progress, merely the playing out of variations on a theme—that on fundamental human concerns, time does not really matter.  The world of the Iliad is a world at war.  There are spears and chariots.  Our modern technology provides us with different weapons.  But no one can read the Iliad without recognizing that the human beings on the plains of ancient Troy are the same as those recently deployed nearby, around the Persian Gulf. In the moral domain, the domain of wisdom, as contrasted with the domain of science and knowledge, there may indeed be no progress but simply the perpetual movement between the demands of the political and the demands of the domestic, between public and private, between, as in the case of the Trojan war, the expedition and the city, the quest and the home.  To say that in the moral domain there may be no progress is not necessarily a judgment of despair; it may merely be the recognition that on fundamental matters there is not much change in the human situation—that the vices and virtues are permanent features of the human scene, that there may be a deep human nature appearing from time to time in a new wardrobe, but fundamentally unchanged. This may even suggest that a common human nature expresses itself in all cultures, western and other, and that as we come to understand our own, we can begin to understand others, but that we will never understand others if we do not understand our own. So if we begin with the Greeks we may not only cure our own cultural amnesia but may even begin to grasp the common human basis underlying all human culture.  That is why, I suggest, we begin with our great opening act, in the midst of things, on the plains of Troy, where people are displaying all of human nature while dying in the odd quixotic quest to recapture an elusive and faithless beauty.

This essay, based on a lecture delivered at the inauguration of the Malaspina College program at the end of September 1991, originally appeared in The Beleaguered College, Institute of Governmental Studies Press, 1997.

Remembering Alexander Meiklejohn


Joseph Tussman (center) with Alec and Helen Meiklejohn, Berkeley 1961. Photo by David Tussman 

I was not one of the Experimental College boys. It had shut down before I arrived at the University of Wisconsin, but I had been present, an envious intruder, at the 25th reunion in 1957. I was there as one of the later generation of Meiklejohn students, crashing the party to be in on the tribute. And now, another 25 years later, there was to be a reunion without him; a gathering of remnants, a few surviving faculty and perhaps a hundred Ex-college students, all well aged, assembled now not in the presence but in the memory of Alexander Meiklejohn. Veterans of an educational war, the thinning ranks of those who remember Alec. Beyond them, scattered, the even thinner ranks of those who were there, who could remember when, as president, Meiklejohn had stirred, scandalized, divided Amherst.

            I did not meet Meiklejohn until all that was over, until he had turned from the struggle to reshape institutions, had retired in some sort of defeat from administrative responsibility, had become famous for his unbowed  gallantry in that great lost cause—educational reform. I met him first when he returned to the university to teach for a few years as a member of the philosophy department before retiring from academic life.

            The students’ lack of institutional memory always seems to surprise the old faculty hand, but to the new student what happened the year before he arrived is merely a little known part of ancient history. So, although the Experimental College had run from 1927 to 1932 and I arrived in 1933, I had never heard of it. I had not heard of Meiklejohn either. “Meiklejohn is back!” I remember the word spreading among the older students on the fringes of whose circle I drifted. There was respect, almost awe, in voices usually stridently iconoclastic. So I signed up for an introductory philosophy course he was to teach, sensing that he was a hero, but knowing very little about it. I am struck by how little was conveyed by what gossip there was about the Ex College. It was different, some sort of educational Eden, briefly flourishing before being done in for reasons or villainies I was too innocent to grasp.

            He stepped briskly, smiling, into the classroom—lean, eager, complete. That is, he looked then very much as he looked to me for the next quarter of a century. He seemed old then, and he never seemed to age much after that; old but not feeble, evoking all the comments we make when the old don’t act their age. He was lively, cheerful, witty, concentrated, crisp. He was also, although open and friendly, very polite and, I thought, very formal. He brought with him an air of anticipation and excitement.

            We were to see Meiklejohn in the classroom, in the conventional academic setting, teaching a course offered by a department. I was not aware of the ironies of the situation. He had been, as we know, an educator—Dean of Brown, President of Amherst, Director of the Experimental College—concerned to create an environment in which teaching and learning would flourish. It is not too much to say that the discrete course, the self-contained class in a subject, the educational institution seen as a loose collection of courses, was the triumphant enemy against which he had always fought.  And here he was, at the end of his academic career, enjoying, or condemned to, the transient hospitality of the enemy camp.  He had, on equal terms, the freedom of that city. There was a truce. He was not to disturb the university’s peace; he was to teach some course in philosophy—whatever he chose, no doubt—and then, in a few years, he was to retire. I do not think he could have believed very strongly in the significance of what he was doing. But if this is true—as I am now sure that it must have been—we students had no inkling of it. He did not reminisce about the good old days. I do not remember his criticizing the educational system; he did not continue the controversy. He simply taught his courses with zest.

            It was a lively class. The mode was discussion of what we were asked to read. He did not explain anything. He smiled a lot, nodded encouragement, listened intently, enjoying it all, welcoming independence, challenging, seldom if ever allowing himself to stand before us as having an idea he was anxious to give us. His enjoyment was contagious and I remember coming into class sullen about the current shape of the universe, warming reluctantly to the discussion, almost cheerful as, at the end of the hour, we streamed down the hill in his lively wake, unwilling to let the argument end.

            What was it all about? Why did it mean so much to me? Why, especially since I did not really believe what I thought he meant to say, do I think of it as the turning point of my life?

            Wisconsin in the thirties was a progressive, politically alert state proud of its LaFollette tradition. The university was swarming with students from the East who, fleeing or exiled from the seaboard, seemed to land either at Chicago or Madison. Madison in the thirties was, with due allowance, something like Berkeley in the sixties.

            These were the early Roosevelt days, and the country was floundering in deep depression. In Europe, Hitler loomed in menace. Nevertheless, the university continued in session. The farmers were still there in the Ag School, scientists (did we know any?) were still in their labs—unperturbed worlds, alien worlds. Most of my older friends were in, or trying to get into, medical school or law school. Not for me. I was repelled by the organic intimacy of the one and frightened by the close-argued, heavy-tomed intricacy of the other. What else was there? Some were edging into the chaotic world of government and economics, but I never seemed to understand what they were doing when they did “research” (I still don’t). John Gaus was making public administration exciting to a generation of solid young men. I did not feel solid. And there was the Department of Economics. A center of intellectual energy, it was home to something called Institutional Economics. Its great figure, John R. Commons, still lived, faded, on the edge of the campus, and homage was paid to Thorstein Veblen. Younger economists wore the halo of commuters to Washington. But for me, as for many, the dominant force in the university was Selig Perlman.

            Perlman to those who encountered him in those days was an unforgettable figure. Swarthy, a nose that was a caricature of itself, a high-pitched squeaky voice, an agonizing stutter, a heavy accent enhancing the impeccable English that painfully emerged, his eyes always fixed on an invisible spot on the ceiling as he excitedly roamed the aisles, he fought adamantly against the popular Marxism of the day, fought it on its own ground with devastating effectiveness.

            I was a cradle socialist, fairly familiar with Marx, not a bolshevik, virtually represented, I suppose, by Norman Thomas. Perlman was the first adult I met who knew all about it and, incredibly, did not believe in it. I sat in his classes stunned, fascinated, destroyed, robbed bit by bit of my faith, all certainties dissolved, all direction lost.

            This is not an attempt to recreate the intellectual ferment of the thirties in a vigorous university. I mention Perlman, as I could mention others, to indicate that Meiklejohn did not appear as a solitary candle in a dim world, a lone mind in a world of clods. The scene was one of vigorous controversy about urgent issues, of powerful assertive teaching. And Meiklejohn, beaten in battle, stripped of his Experimental College, strode into an expectant classroom not quite in the center of things.

            What he offered us was the figure of Socrates. That is an interesting selection from among the possible offerings to the young facing a time of troubles. What thirst could Socrates slake? Of course the man sentenced to death by the Athenians on the charge of subversion, of misleading the young, must be with us on the side of the angels. That he refused the invitation to escape the death penalty out of respect for the law that had so unfairly condemned him, out of commitment to the city, was a troubling complication. We were being introduced to the loyal questioner when it seemed obvious that questioning was called for and loyalty was suspect—a fault not a virtue. No, Socrates was not an unflawed hero. He did go, with dignity, into that dark night. But his reasons!

            Nor was it easy to accept the Socratic profession of ignorance. It takes time to realize that life is lived in a deep fog lit fitfully by the glitter of illusions, and we thought that he must have known what he denied that he knew, and found his denial affected, insincere.

            And even the questioning! How many generations of students have thought that Thrasymachus, the unabashed realist, is merely tricked into silence, outwitted but not fairly refuted. And poor Euthyphro, rushing off, in an early civil-rights case, to report his father for mistreating a slave, waylaid by Socrates and drawn into a diversionary argument about whether the gods love what is right because it is right or whether their loving something makes it right—about whether, as we might put it, public opinion is the measure of rightness. When the injustice is so obvious why must we be stopped to question? There is a kind of impatience with Socrates; in a mood of practical urgency we brush aside the Socratic web and rush into the pit, muttering at Socrates for trying to delay us. Question, yes—but what about action!

            So, Meiklejohn brought Socrates to class and introduced him to us. Avoid the unexamined life! Matthew Arnold, offering culture in a Socratic mood to his busy world, wryly reports the criticism: “Death, sin, cruelty stalk among us, filling their maws with innocence and youth, and me in the midst of the general tribulation, handing out my pouncet-box.” Well, I have come to love Socrates, but it may not have been love at first sight.

            Meiklejohn also brought us, newly published, his book, What Does America Mean?  To reread it after almost half a century is a bittersweet experience. Ideas long appropriated appear like forgotten old friends, evoking a flood of sharp images, the passions, the doubts, the agonies of youth. To remember Alexander Meiklejohn is to re-examine oneself, to painfully recall the ways of self-defeat, to retrace journeys, to feel the mind begin to stir again over questions never answered but somehow put aside.

            What Does America Mean?, deeply characteristic as it is, has a special quality of intimacy almost unique among his writings. This is due, I think, to its being written for his college students. It is not condescending, but it has that special unguarded quality of working classroom discourse; it is a teacher’s working revelation. It has an air of vulnerability, and now, as then, I feel protective about it. I don’t think I want everyone to read it; I would recommend it to only a few of my friends. I do not want to hear what Callicles has to say about it, nor the scoffing of Thrasymachus. I shrink from the burden of defending it, although it is all true:

            We are spirits as well as bodies; we have obligations and commitments as well as interests and desires; significance is more than satisfaction; excellence is more than happiness. The pervasive human tragedy is the self-defeat in which the higher is confused with, betrayed to, the lower… Every attempt I make to describe the “doctrine” strikes me as a hopeless caricature. It is, I think, the most personally revealing of Meiklejohn’s books, the unshaken base from which he, all his life, conducted his sorties against the materialism he detested.

            How did we take all this? In a way, I think, that seemed a part of Meiklejohn’s special fate. Generally, we loved where he came out, but we could not accept or understand the philosophy that led him there. Spirit? Whatever is, is solid. We were, on the whole, confident materialists. We felt in our bones that interests were real and obligations snares for the unwary, part of a pernicious ideological superstructure. Happiness made sense. Excellence? Quite all right if it contributed to happiness; but preferable even if it did not?

            We overlooked his philosophical oddities because he warmed us by his criticism of the society we lived in, of exploitation, by his scornful rejection of the marketplace as the center of life, of selling as a human transaction, of the competitive success that destroys us. He seemed a prophet crying in the marketplace. He thrilled with sympathy for the notion of a people undertaking, together, to plan for justice and for beauty, with scorn for the idea that each should simply seek his own good. He laughed—and it comforted us—at the idea that the business of America was business. Oddly enough, I cannot remember ever really thinking of him as a socialist or, if the thought crossed my mind, taking it seriously as having anything to do with the core of Alexander Meiklejohn. The issue was deeper; he was an idealist.

            As I have said, we found the talk about spirit and obligation to be unreal, something to be treated with polite skepticism. (Perhaps I should not speak of “we” so casually. There were some young Meiklejohnians who eagerly adopted the language of “spirit,” but I thought, I must confess, that they simply didn’t understand anything. Sometimes “we” shrinks to “I.”) But there was another point about which we fought with impolite vigor. What Does America Mean? The very title was an irritation. What do you mean, “What Does America Mean?”  A country doesn’t mean anything! It is just spread out there, sometimes where it shouldn’t be. It has just grown. Individuals have interests and purposes, Americans have them like everyone else. But “America” is a seething mass of individuals, special interests, classes; it has no special purpose of its own, no unifying, transcending common purpose uniting its diverse members. Years later a classmate unmet for decades shouted in greeting across a room, “common purpose!” and it all came flooding back—the excited class, Meiklejohn smiling, nodding, not arguing as we raged against political piety and superstition. Common purpose, indeed!

            It was a great stumbling block. Individuals and their interests seemed real enough. Radical enlightenment expressed itself in the view that classes were even more real. It did not seem strange to assert that Jones was a member of the working class whether he realized it or not. It was a fact about him. He needed to be brought to see that he had interests he was not aware of, that he shared a common class destiny, a common purpose—something given by the situation, not chosen, to which he could, if morally asleep, be oblivious, but which, if he awoke, gave his life significance…It is obvious that the movement from individual to class consciousness has something in common with a movement from individual to community consciousness. But the awareness of class conflict, of class war, made it difficult to assume a unity that transcended the class struggle. Classes seemed real; the broader “community” did not.

            I should say, rather, that the broader community seemed real enough to Meiklejohn but not to some of his devoted followers. And it made him an easy target. Who, after all, was talking about the political community, the state, those days? Not the phalanx of left intellectuals; not the individualistic liberal. Fascists? Simple-minded patriots? Oblivious of his bedfellows, Alexander Meiklejohn? Obligations, duty, the general will—clearly the language of the enemy. And we could not cure our Master of his habit of using that language. We had to try to defend him. He seemed to be attracted to dangerous ideas.

            Looking back, it seems to me that I was not converted to anything by What Does America Mean? When I remember being shaken, it is by Perlman and his persistent argument. To lose one’s socialist faith is one thing; to embrace idealism is quite another matter, and I was not prepared to do that. But the special quality of the situation was that we loved Meiklejohn, loved the quality of his human sympathy and social perception, the hard integrity of his mind and wit, but viewed with suspicion, even embarrassment, the idealistic philosophy he seemed to believe in. I believed in Meiklejohn, but not in what Meiklejohn believed in.

            So it is odd to reread it after all these years. The philosophy which seemed to me then to be so unsubstantial now seems to have the irresistible weight of common sense. The argument does not seem new; it is somehow heavier, right. But to my chagrin, the fervor of my accord with the criticism of the marketplace and all that seems to have abated a bit. How can the student, the disciple of Alexander Meiklejohn, explain the depressing drift into reasonableness of the editorial page of the Wall Street Journal? The god that failed, to be sure, but is that really enough? And why, unlike years ago, when I read, “We must take the social order in our hands and set it right,” do I shy like a frightened horse or shudder like Burke presented with an interesting new proposal? I will, of course, postpone any attempt to answer these depressingly interesting questions…

            So there was the figure of Socrates, and there was What Does America Mean? There was also a faint expectation of philosophical conflict in the air. Max Carl Otto was a popular figure in the university—an exponent of a variety of pragmatism, a follower of James and Dewey. Since Meiklejohn was an idealist the stage was set for a Wisconsin variation on Harvard’s earlier James-Royce encounter. It did not quite come off. I had always liked William James and had, therefore, tried to study Dewey and was disposed to be a pragmatist. I was also prepared to like Otto, who was iconoclastic about the gods and derisive about Kant and that sort of thing. But Otto tended, I thought, to ridicule what he disagreed with and was quite unfair in his characterization of Meiklejohn. At any rate, there was no great feast of argument between them. Still, I tried to show Meiklejohn that he and Dewey really agreed about everything. He smiled and shook his head; I didn’t convince him.

            In my last year at the University of Wisconsin, I was still at loose ends, and Meiklejohn suggested that I take up graduate work in philosophy. That, he said, meant either Harvard or Berkeley. Since his home was now in Berkeley, I did not consider Harvard and one fall day in 1937 I trudged up the hill to the house, about a half-mile from the Berkeley campus, that was to be Meiklejohn’s home for the next quarter of a century.

            I was rescued from the miseries of life as a graduate student in philosophy by World War II, but not before I had endured some four years of it, living most of the time within a few blocks of the Meiklejohn home on La Loma. His chief activity during that period was in connection with the San Francisco School of Social Studies, which he founded and struggled to maintain. It was a venture in adult education, something very close to Meiklejohn’s heart. I was drawn into the edge of the work of the school, asked to teach several classes. There was a lively faculty group. Meiklejohn took a hand, Helen (Mrs. Meiklejohn) was enthusiastically engaged, John Powell from the Experimental College faculty acted as director (was director, I should say, but John never seemed quite like a director to me), and there was the sparkling team of Hogan and Cohen. And several others I never got to know very well. John Powell has written interesting accounts of the school. I did not witness the throes of its creation, nor was I there when, during the war, it closed its doors.

            It was an attempt to create an institution within which a persistent effort to develop the political and social understanding necessary for the life of a democratic citizen could be sustained. It was not concerned with credit towards a degree, nor with vocational retooling or promotion, nor even with cultivating the enjoyment of leisure. It was to be a place for adults to study their common concerns as members of the polis. “School,” I suppose, is the inevitable name, although it is a name laden with doom. But we seem to have no better name for what, in any case, we hardly have at all. Still, there is a not unreasonable dream that adult members of a democratic community will, as a normal part of their lives, read and gather to discuss materials out of which a common understanding will grow, a school that need never come to an end, a habit from which there is no graduation, a community made by taking thought together…Amherst, Wisconsin, San Francisco—a story with the same golden thread. Not, this time, on a green New England college campus, not in an enclave by the shore of a lake on a middle-western university campus, but in a part of an office building in downtown San Francisco or in the rooms of a Santa Rosa Junior College in their deserted evenings.

            My memory of those days, little as I trust it, is of a Meiklejohn slightly withdrawn from the battle. He was in it, and his presence was clearly indispensable, but he was more like Moses raising his arms over the battlefield than like the commander in the field. A bit remote; a good part of his mind otherwise occupied. That, at least, is my impression; although I’m sure it understates the degree of his devotion to the school, to adult education.

            He was also engaged with the American Civil Liberties Union. I had not yet awakened to the delights of constitutional law and I wondered why he was so interested in something as unimportant as civil liberties. The Northern California branch was in a running dispute with the national office about the inclusion of Communists on the ACLU board—something like that. Meiklejohn, of course, argued against the exclusion of Communists—as he was also to argue brilliantly against the exclusion of Communists from university faculties. But apart from this particular controversy, his concern with civil liberties and with the ACLU was deep and persistent. And it had that odd quality I have already mentioned. Civil libertarians hailed him as their champion; few have matched his enlightened passion about the First Amendment, his defense of freedom of speech. But, of course, while they loved where he came out they did not, generally, understand or agree with how he got there. Most thought of civil liberties as belonging to individuals “against” the state. Meiklejohn thought of them as the powers of citizens implied by their public function (a point I hope to make clearer when I speak of Free Speech). Puzzled by his reasons, enthusiastic about his conclusions. Even, perhaps, forgiven his reasons for the sake of his conclusions…

            Eventually, there were consequences. Meiklejohn always insisted on the crucial distinction between the powers or civil liberties of the citizen as ruler and the civil rights of the citizen as subject. He considered the ACLU as primarily dedicated to the former, to the protection of the integrity of the mind of the sovereign people. In the end, if I may pass lightly over intervening years, he became increasingly unhappy with the tendency of the ACLU to move away from his conception of its proper role. Finally, he withdrew from the ACLU. He withdrew quietly. He did not, he told me, want to resign with a public statement of disagreement. But, with disappointment and regret, he left the organization he had cared for for so many years. In the sixties I had actually joined the ACLU and was, for a time, on the board of the Berkeley branch. I left in a huff, in disgust, over what I thought was the ACLU’s utter failure to understand academic freedom and its stupid tolerance of disorder on the campus. I felt quite Meiklejohnian, but I must stop short of tarring him by association.

            So, in those pre-war years Meiklejohn was busy with the San Francisco School of Social Studies, with the ACLU, and with what seemed to me to be a booming social life. He and Helen had many friends on the faculty and in the area, and visitors from the East were always dropping in. Lunches, teas, dinners, social evenings seemed to besiege the carefully protected mornings in the study. The study in which, at that time, he was writing Education Between Two Worlds.

            Education Between Two Worlds is a sustained, impassioned attack on the competitive individualism which he calls “protestant capitalism,” or when he warms up, “Anglo-Saxon protestant capitalism.” Not exactly, for Meiklejohn, a newly discovered villain. In fact, from start to finish, grappling with life as a teacher, what seems to unmask itself everywhere as the enemy is, on the one hand, the adamant assertion of the private or “selfish” interest—however enlightened the self—as the proper aim of all action, and the companion view that on the intellectual plane the mind was to rest content with “the way it seems to me” as the final view of things, polished with the politeness of a tolerance for the regrettably different views of other minds. We each have desires; we each have opinions; and if we have good manners we can live with the unavoidable conflicts without the futile struggle to impose a common “good” on the teeming world of desire, or a common “truth” on the mad and blind world of opinion. Some such view, encountered everywhere, was, to Meiklejohn, the denial of the possibility of human fellowship and human sanity, a rejection of Jesus and Socrates.

            The book has a dramatic form. Something has broken down and we need to rebuild, but we stand baffled amidst the rubble. Surprising studies of key figures trace the story: Comenius, the frustrated hero of the old religious order; Locke, the destructive compromiser; Matthew Arnold, the yearning victim; Rousseau, the incoherent prophet of a new order. The individual studies are gems in themselves, fresh, perceptive, controversial interpretations; made, as they are put together, to carry the story line. The story is really simple, stark, central. The old religious order with God, and the conception of men as his children, a human family under a single moral law—that conception of the world is shattered, gone, not really believed in. And even when not explicitly renounced, we have learned to put religion to one side, to separate it from the prudential world, banish it to a private realm. And, generally, the church has been replaced by the state as the central public institution. The public school, under the aegis of the state, has become our chief teacher. Can it, how can it, what can it, teach? The intellectual basis of the old order is gone; we are left with the competitive individualism of an essentially warring world, fundamentally inadequate; we seem not to have developed the understanding that would do for the state what religion had done for the church…We are, as Arnold mourned, between two worlds—one dead, the other powerless to be born…

            I must pause over the relation of Meiklejohn to religion. It must have seemed to me that anyone who was “idealistic,” who spoke of duty and obligation, of brotherhood, of unity, was religious. Meiklejohn even looked a bit clerical; he had bishops and rabbis among his friends; he spoke lovingly of the culture of Burns and the Bible; he wrote tenderly of Comenius. And yet…he was not a churchgoer, he was not pious, he was not devout. He did not believe the religious story in the terms in which it was told, and he did not pretend he did, or act as if he did, or ever use a religious prop to support an argument, or ever wrap anything in religious mystery. Risking all sorts of misunderstanding I will assert that in all the years I knew him he was absolutely unreligious. Unreligious. “Atheist” does not describe it, since what we usually think of as atheism is merely a form of fundamentalism; and to deny the literal truth of a parable is as misguided as to affirm it. His position, made explicit in Education Between Two Worlds, is that some deep intuitions were once expressed in religious form and language, but the religious form now no longer serves them; that these still-valid insights need to find adequate expression—“political” (in a proper sense) rather than religious.

            The difficulty with this analysis is that it is both undeniable and unpalatable. Religion has become for us an essentially private matter; church and state have become “separate;” and the state, moving into the space left by the shrinking church, has become the instrument through which we seek the public good. Through which, especially, we seek to educate. At the same time, we can hardly be said to have a view of the “state” which would lead us to trust it with the care and nurture of the soul. We would need to think better of the state. But that “thinking better of the state” seemed almost to be the distinctive mark of the enemy—of the authoritarians or totalitarians against whose exaltation of the state we were being driven to a defiant affirmation of “individualism.” Meiklejohn seemed all too willing to think sympathetically about the state, about government which, more deeply understood, might be made a worthy servant of the community’s aspirations. But he was swimming against the tide. The liberal mind found “pluralism” more safely congenial; conservatives, when not drifting into a libertarian folly, wanted, at least, to shrink the public sector. Government, however much we depended upon it, was in disrepute; education, therefore, in serious disarray.

            I read parts of Education Between Two Worlds in manuscript, but I was not able, at that time, to get a sense of the work as a whole. I agreed with the formulation of the problem. A long section in which Meiklejohn dealt with Dewey seemed to me to be right, but to be too polemical and somehow not very satisfactory. I found it hard to disagree with what Meiklejohn offered as the way out, but I also felt unmoved by it, and a bit let down. I suppose I expected, as a friend of mine said, that he would pull a rabbit out of the hat, and I didn’t see a rabbit—although I don’t suppose I would have recognized one if it had been—as perhaps it was—produced. But I was a bit preoccupied with exams and the approaching war. I was an isolationist in those days, reluctant, as we were saying, to pull the chestnuts of the British Empire out of the fire, horrified by Hitler, appalled by the power of the Axis, raised on the futility of war, the injustice of Versailles—a stranger to the culture of guns. Meiklejohn was not an isolationist, but I cannot remember arguing with him. When, some months before Pearl Harbor, I was drafted, Meiklejohn said only, “You will not want to miss the formative experience of your generation.” I was startled by the unexpected remark, but it seemed to make sense; and in any case, Pearl Harbor overrode doubts.

            Sometime in 1942, just out of OCS, I had a few days in the New York area and went to Annapolis for a day to visit Meiklejohn, who was spending some time as a friendly observer at St. John’s College. It was the only glimpse I had of him on a small, eastern college campus, and I don’t think I ever saw him happier and more at home. It was also the first glimpse I ever had of that lovely world. Meiklejohn belonged there, as, I suppose, he did not belong in Madison, as he did not belong in San Francisco, as he really did not belong in Berkeley. When the roll is called, it will be Meiklejohn of Amherst. It was that day, I think, that he told me of his visit to Woodrow Wilson, still in the White House. “Ah Meiklejohn,” sitting up in his sickbed, the Scot president of Princeton greeted the Scot president of Amherst, “When I get out of here we must start a college together!”

            That day at St. John’s remains in memory. Green, quiet, sunny. Meiklejohn smiling, loitering at the tennis courts, alert at the back of a classroom, jesting with Scott Buchanan about something Scott was brooding over. A Meiklejohn absorbed, springy-stepped, happy, in a world he knew to the core and loved.

            After the war I returned to Berkeley and soon settled into teaching at the university. Meiklejohn was there on La Loma. The San Francisco School of Social Studies was gone. Education Between Two Worlds had been quietly received by the world. And Meiklejohn was launching his career as expounder of the meaning of the First Amendment. Berkeley had lost its bucolic air and seemed quite in the center of things. We were excited about big issues—the United Nations, the bomb, hopes for peace and a new order, growing tensions with Russia as the cold war developed, Senator McCarthy and the hunt for subversives…Against this background, the great dramatic episode for the university, and for Meiklejohn, was the faculty loyalty oath fight of 1949-1950.

            It was a bitter, heartbreaking fight, and in spite of some ultimate judicial triumph and the vindication and recall of the non-signers, in spite, even, of the amazing persistence of a determined but divided faculty, the feeling I now have as I try to recapture the memories of those days is a deep sense of defeat—a defeat which tormented me even then and from which I have probably never really recovered.

            Faculty members were required to sign a statement disavowing membership in (or belief in the principles of) the Communist party, as a condition of continuing employment. There were, of course, all sorts of issues, motives, pressures, questions, variations in formulation, but I think a crude formulation which ignores vanished contextual subtleties will serve best:  Should a Communist be allowed to teach in the university?

            There was a simple common-sense view that Communists did not believe in democracy, would destroy it if they could, and that it made no sense to give them the chance to undermine the democratic educational system. (How do you convince the man in the street that you should hire your enemy to corrupt your children?) At the level of academic common sense, not that of the “man in the street” or the “people out in the state,” the view was that an acknowledged Communist had a disciplined commitment to a dogma as interpreted by the party, expressed as the party-line, and was not, therefore, committed to the free pursuit of truth, and did not have the open-mindedness essential to the community of scholars. The regents, in requiring the disclaimer, were standing on popular ground. Nevertheless, the faculty found itself drawn into a bitter, prolonged fight.

            In a simpler world, one of our respected colleagues would have simply announced, truly, that he was a member of the Party, that he believed its program was best for America, that he wanted to continue his scholarly work and teaching, that he could not take the oath without lying and he wouldn’t do that, and that he didn’t see why he should be fired. Alas, no one stepped forward to give us a concrete case to fight about. We were not to know, made it a point of honor not to try to find out, if there were real live Communist party members on the faculty. The matter was to be fought out on “principle.”

            But what principle? Many, if not most, thought that a dedicated Communist was as unfit to teach as a dedicated Fascist, as (a bit sotto voce) a dedicated Catholic (the pope, infallibility and all that…), as in fact a “dedicated” anything—if the dedication was to anything but the unbiased pursuit of truth. And in any case the faculty in its “practical” mood (an amusing madness that sometimes seizes it) did not think a defense of Communists would sell in the provinces. So that the basic question, involving the difficulties of the relation between commitment and truth or between passion and cognition, was avoided so far as was possible. Instead, we retreated to such things as: party membership involves guilt by association; actions are punishable, not beliefs; oaths are silly and don’t work because liars take them routinely; singling out a group like professors was discriminatory and insulting; the regents had no business meddling with hiring and firing, which were governed by faculty procedures; “academic freedom” was being violated; and so on. Early in the struggle, tragically, I thought, the faculty—the Academic Senate—abandoning the heartland, formally endorsed the regents’ anti-Communist policy—while the fight continued on a variety of other grounds.

            Meiklejohn had published the clearest defense of the position of the Communist teacher and scholar in the university world. If the person was otherwise qualified, the fact that he believed in communism and joined the party was an exercise of judgment and a matter of intellectual freedom—not a ground for disqualification. Beyond the question of “rights,” he also argued the educational advisability of having convinced Communists in the institution. And, of course, he was highly sensitive to the procedural matters that lie at the heart of academic freedom. It was a brilliant argument, and I agreed (and still agree) with it completely.

            During the long struggle, Meiklejohn felt himself to be in a delicate position. He cared passionately about the issue and thought it was the most significant crisis of the modern American university. He had close friends on the faculty, yet he was not a member of the university; it was his battle, but he could not take a direct part in it. The leader of the embattled faculty, of those who fought against the oath requirement, was Edward Tolman: shy, courageous, sensitive, intelligent, a man of utter integrity. Tolman lived on La Loma also, his home just across the way from Meiklejohn’s. They were old friends. Tolman was a scientist, a psychologist, and not a constitutional or political theorist. He did not formulate the issues or elaborate defenses of abstract positions. He acted out of a sense of responsibility for more vulnerable colleagues, out of an instinct for freedom and decency as well as a conviction that there was something improper about the demand for an oath, for a disclaimer of belief. Close as he was to Meiklejohn, I do not think he found his ideas, his theories, congenial. Nor did many who were taking part in the fight. Meiklejohn was, after all, the spokesman for the “absolutist” position, the defender of the right of Communists to teach. That was a position that, as I said, was abandoned by the Academic Senate, which, after endorsing the regents’ anti-Communist policy, could no longer continue the fight on those grounds. But the play ran on, without Hamlet, for a bitter year—the grounds of opposition constantly narrowing as the faculty, involved heavily in negotiation, lost one piece of ground after another.

            Meiklejohn, uninvolved in the day-to-day struggle, not a party to deals and concessions, had no need to change his position or to drop the main issue. He continued to follow the battle closely. I, and others, dropped in frequently to discuss the situation with him. He was especially concerned about the position of the left-wing junior faculty members, not, by this time, it must be said, the center of the faculty’s general concern. Meiklejohn seldom asked me to do anything, but once, as I was going off to some meeting to plan the next move, he asked me to raise a question about the general indifference to the fate of a young “radical.” I intended, because he asked me, to do so. But I found that when the moment came I shrank from doing it. I was being practical that week, and I could not bring myself to do something so quixotic. I still remember the bitter taste of failing to do one of the few things Meiklejohn ever asked me to do.

            So, throughout the oath fight, Meiklejohn was on the sidelines, never inciting others to fight, sympathetic, clear-headed, and, at times, I think, almost heartsick. Perhaps not heartsick; he was used to being with a losing minority and never seemed to lose his verve. And always, he was surprisingly realistic. Above all, in all the turmoil he never seemed to lose sight of the fundamental issues, never seemed to lose his appreciation of the quality of human action. Had he been on the faculty, I cannot imagine him signing the oath.

            More than a decade later, during the student unrest, the so-called free speech movement, Meiklejohn was still on the sidelines, living a bit more quietly on La Loma. He was, of course, deeply interested in what was going on, and many of the student leaders found their way to his home. And I would drop in frequently while he was lingering over the paper and breakfast and bring him up to date with what I thought was going on. (I was moving steadily from Young Turk to old guard.) He was, by this time, the grand old man of free speech. And it was assumed that he would be in sympathy with a movement that unfurled the flag of free speech—a movement of students in a university which was large and impersonal and generally criticized as being indifferent to the educational fate of its undergraduates. Meiklejohn liked students and he listened to them and understood them. But while in the faculty oath fight he had provided active intellectual support for the position opposed to that of the regents, on the “free speech movement” he was, I believe, publicly silent. His position, as I remember it: in so far as students were objecting to a fragmented undergraduate education, he agreed that much of undergraduate education was a shambles. He did not think that students knew how to remedy the situation and did not think that student participation in the running of the institution made any sense. This may perhaps surprise some who misunderstood his deep sympathy for students. But educational matters were, in fact, a very small part of the student revolt in any case.

            As for the so-called free speech issue—the right of students to pursue their politics on campus—Meiklejohn’s position was clear. Students had no “right,” not even a First Amendment right, to engage in political activity on campus. “The issue,” he once said to me, “should not be put in terms of rights. It is entirely one of educational policy. If in the judgment of the university authorities it is conducive to the educational purposes of the university to permit political activity, then it should permit it; if not, not.” I do not think that, as an administrator, he would have compromised on this fundamental point. So, while sympathetic to the students, he did not agree with their position. On the other hand, it would have been very difficult for him to come to the aid of an administration whose exercise of its authority he regarded as an educational disgrace. So, he listened to everyone, nodded, asked gentle questions, did not argue, did not incite, did not make public declarations. In private, he was as close to bitterness as I had ever seen him.

            When we think of his first seventy years, it is Meiklejohn the educator. For the two decades after that he is, of course, Meiklejohn of the First Amendment. Even earlier he had been interested in law and the Constitution. I remember being surprised, at Wisconsin, by his defense of the Supreme Court against Roosevelt’s attack: a packed auditorium, Meiklejohn cheerfully taking the unpopular side. A single sentence of his floats intact out of memory’s haze: “It was a greater mind than Justice Holmes’ that said ‘Only the Permanent changes!’” Meiklejohn had been close to Walton Hamilton and Malcolm Sharp, two great teachers of the Constitution, and had seen the fertility of the Constitution and law as teaching material. And, of course, there was his long involvement with the ACLU. But it was with the publication of Free Speech in 1948 that he emerged as a great interpreter of the First Amendment.

            The First Amendment is, as every lawyer knows, a complicated and treacherous swamp—a simple statement overlaid by a thick and perplexing gloss. A modern landmark was the work of Oliver W. Holmes, Jr., who had mitigated the apparently unqualified character of “no law abridging the freedom of speech” by adding, to put it crudely, “except when there is a clear and present danger” of something or other…How clear, how present, how great a danger of what to what are questions that have engaged a great deal of legal ingenuity. The upshot is that there is to be no abridging of the freedom of speech except, of course, to avoid “danger,” and history and practice have worked out the details. These have been worked out in such a way, moreover, that, on the whole, we do not complain of too little freedom of speech. To take “no law” as actually meaning “no law” is an affront to our practical sense, hopelessly “absolute;” and Holmes and clear and present danger have set it right.

            Meiklejohn, if I may dare to oversimplify, did two things. He narrowed the scope of the First Amendment by reading “freedom of speech” to mean the freedom of political speech; and, thus narrowed, gave it a preferred position among forms of communication. So that government may not abridge the freedom of political discussion, even on the ground that the government thinks the discussion is dangerous; “non-political” speech—commercial speech, for example—does not enjoy that degree of protection and may be, as are other activities, governed by “due process of law.” In short, one kind of speech is given more protection, the other kinds less.

            This interpretation—narrowing and deepening the First Amendment’s protection—is supported by a rather surprising move. I remember the day I first heard it. Professor Jacobus tenBroek and I were sitting in Meiklejohn’s study while he read to us an argument against the power of congressional committees to probe the political beliefs of citizens. As the issue had been put, there were said to be three branches of government, each with its necessary powers, and, posed against these, the private individual with his desire to express himself and his desire for privacy—a private desire which must give way to public necessity. But, Meiklejohn pointed out, there are really four branches of government—the fourth branch being the electorate, a branch of government with special functions to perform and with powers that must be protected if that function is to be properly performed. Each citizen is a member of the electorate and in that capacity has the powers of a public office, quite apart from his private interests and rights. The First Amendment, Meiklejohn argued, should be read not as referring to the private right of expression, but as a statement of the powers of the electorate and the assurance that these powers—assembly, speech, publication—are not to be interfered with by another, an inferior, branch of the government. The amendment is the fundamental guarantee of the political power of the people, acting as the electorate, a power so fundamental as to be properly taken as, relatively, absolute. To thus relate the meaning of the First Amendment to the theory of self-government through the fourth branch is, I think, a stunning stroke of genius.

            Free Speech, reissued with some added papers as Political Freedom, is probably, at present, the most readable of Meiklejohn’s books. And it has some characteristic Meiklejohnian features. It is both crisply written and full of passion—as usual; it is also, as usual, immoderate and defiantly iconoclastic; it is an analysis, but it is also an attack. It is an attack on a great popular hero and a related attack on the great popular principle for which the hero is honored as creator—on Justice Holmes and on “clear and present danger.”

            Holmes is surely one of the most popular of American legal giants, and to attack him is to ask for trouble. But Meiklejohn finds the combination of hard-headed “realism” or cynicism eked out by sheer sentimentality an example of the quality of mind that marks the failure of education. So with some preliminary gestures of respect he launches a powerful attack on the mind of Holmes. But, of course, the worshipers of Holmes will simply set their dented idol back on its feet and continue the idolatry. And, as I have indicated, he rejects the position that the meaning of the First Amendment is adequately expressed by the view that there is a personal right (a natural right?) of free expression limited only by the need to avert a clear and present danger. But his own position involves a rejection of competitive individualism and a troublesome view of the state that seems both innocent and dangerous. Clear and present danger seems good enough to many warriors in the civil liberties struggle. To reject it is impractical, but it is nice to have someone around (like Meiklejohn) to take an “absolutist” position that, dangerous or not, freedom of speech should never be abridged. So, without being understood, Meiklejohn was hailed as the great First Amendment absolutist—stirring, but, as a philosopher, naturally a bit idealistic…

            Meiklejohn’s central preoccupation during the last decades of his life was with the First Amendment and with the claims upon his time and attention flowing from his stature as the defender of the absolute right of freedom of speech. In this connection I must speak of a month-long session held at the Center for the Study of Democratic Institutions in Santa Barbara. The Center had been established and was presided over by Robert M. Hutchins, who, after a brief and stormy time with the Ford Foundation, went off to create a serious non-academic intellectual center. A month, one summer, was to be devoted entirely to a consideration of the First Amendment and, in addition to the resident members of the Center, a number of others were invited—Meiklejohn, of course, and Harry Kalven of the University of Chicago Law School, and I, among them.

            It was an interesting ramble through some of the thickets in that field of constitutional law. Meiklejohn’s ideas had been published, so they were part of the background, familiar to us. We went through a number of papers without unusual enthusiasm. I took up four days with the first presentation I ever made of what, some years later, was to be developed and published as Government and the Mind. Kalven, full of wit and knowledge, was, I think, the star of the show. Lots of indecisive meandering, mostly enjoyable. Meiklejohn sat there quiet and attentive, saying very little; but, as usual, everyone seemed to be speaking primarily to him, vying to impress him. His ideas, as I said, were familiar and, on this occasion, were not presented as needing to be argued for. I remember only a growing sense of regret that the negative form of the First Amendment might obscure the possible responsibility of government for cultivating and enhancing the life of the public mind. But on the whole, the conference left matters about where they had been. What else should one expect?

            What was memorable to me about that month with Meiklejohn at Santa Barbara was not the free speech discussion; it was education. Assembled there, almost accidentally, were some of the leading figures in the modern history of American higher education. There was Hutchins of the University of Chicago; there were Scott Buchanan and Stringfellow Barr of St. John’s; there was Meiklejohn of Amherst and Wisconsin. (The missing voice was Dewey’s, perhaps, but Dewey [or his followers in the stronghold of the Teachers College] had never really hurled himself against the formidable institutional structure of the American college or university.) Hutchins was an impressive figure. He presided with intelligence, grace, courtesy, and beyond presiding, he had a mind of his own. He worked hard, still getting to his desk by five or so every morning, winding up a fair day’s work by 10 AM. He had lived at the center of conversation and argument for years, had heard everything, read widely, listened patiently, assimilating what came to him to a strong structure of convictions. Striking, courtly, formidable. He was, as I said, trying to create a new kind of intellectual center, beyond the gates of the university. He was devoted to the attempt, but the Center did not, I think, live up to his hopes. Still, it was a gallant attempt and Hutchins seemed relatively untouched by the bitter infighting that swirled around him—small stuff, no doubt, compared with what he had endured in his attempts to launch and protect his college at the University of Chicago. What little I knew of the Chicago enterprise I did not find terribly congenial. I did not like its metaphysical basis, and I had resented Hutchins’ articulate visibility. He had, as spokesman for reform in higher education, stood in the place I thought should have been Meiklejohn’s. I thought the Experimental College was a better idea than the Chicago plan, that Meiklejohn had a deeper mind than Hutchins’, that Hutchins had a better public-relations flair…But all that was long ago. Both shared a common experience of defeat in the educational wars, and the relation between the two of them was warm and friendly. It was a delight to see them at the same long table.

            And there was Scott Buchanan. He was one of the permanent members of the Center and, to my mind, one of the most powerful and interesting influences there—learned, broodingly thoughtful, a ranging irreverent imagination, a patient gentle wit. His presence was, I thought, comforting and reassuring to Hutchins. Scott had been an undergraduate at Amherst during Meiklejohn’s presidency and Meiklejohn always remembered that when he was “fired” from the presidency, Scott had said to him, “You have been Socrates; now it is time for you to be Plato.” Buchanan sketches part of his own intellectual history in the introduction to Poetry and Mathematics, but I don’t remember him telling the story of St. John’s in detail. Meiklejohn was close to St. John’s as he was never close to Chicago, although he had reservations about the conception of liberal education as defined by the trivium and the quadrivium. Still, St. John’s was the embodiment of a serious conception, and Meiklejohn enjoyed his times in friendly residence there. Buchanan and Barr had left St. John’s some time before this meeting in Santa Barbara. Scott mentioned wryly that when he wrote the description of the program in the first catalogue he thought it would be changed every year; he had not expected it to become a bible. Buchanan was close to Meiklejohn and close to Hutchins and, as we sat there, the only one of the three whose institutional efforts still had a concrete expression. The Experimental College was gone; Chicago had dismantled the college after Hutchins had left; but St. John’s, abandoned by Buchanan and Barr, had survived, still living, I think, on Scott’s inspiration. But they were all retired from the struggle, assembled now to discuss the problems of intellectual freedom under the First Amendment.

            In those two post-war decades Meiklejohn lived, as I have said, on the edge of the campus, with many faculty friends but nevertheless at arm’s length from the university. He often appeared on campus. Every Friday a group of a dozen or so faculty from different departments met for lunch in a room in the faculty club. Meiklejohn was a member of this group and often on Friday I would walk down the hill with him to the club. It was a relaxed, loud, jesting lunch—the pursuit of truth adjourned for the moment—seldom serious, gossipy, full of confident self-assertion. Meiklejohn sat there, well-liked, one of the group, joining in laughter but seldom evoking it, quiet and observant, not in the least interfering with the unabashed display of unharnessed faculty wit. But I would wonder what he thought of it all, coming down from his study in anticipation of intellectual fellowship, a foray into the world that was essentially hostile to all he believed in about college education—hard-headed successful scholars, teachers of disciplines, rugged individualists of the mind, personally very friendly, but professionally hostile to almost everything Meiklejohn as an educator had always stood for—not only hostile toward but triumphant over…Meiklejohn the teacher in the midst of the successful professoriate. It could really only be a luncheon truce, although the group, I think, was unaware of how deeply he was at odds with them; they were professors; he was an educational reformer—enemies by nature. Strangely, of all the years of talk I remember only one exchange: “Alec, it’s amazing that at ninety you can polish off that strawberry ice cream. How do you do it?” “Oh,” came the reply, “I’ve always followed a rule: anything I want, but never a second helping.”

            I had long been restless about the nature of lower-division education, and when I returned to teach at Berkeley in 1963 I began an effort to establish a version of the Experimental College on the Berkeley campus. The program did not go into operation until 1965, and Meiklejohn was not there to see it, but I had discussed my plan with him before he died. It was, as I said, based on what I understood, or perhaps fantasized, of the Wisconsin experiment; but I was hesitant to discuss it in detail with Meiklejohn or to involve him in it in any way. I felt that there was something presumptuous in my trying to do what he had done, and I did not want to ask his approval or involve him in criticism. But I told him about it and about the progress of the enterprise as it made its way through the obstacle course of committee approval. He listened patiently. He made no suggestions; I don’t think he even asked any questions. He was, now that I think of it, utterly unexcited by the prospect. I was, and I knew it, and he knew it, no Alexander Meiklejohn. He could have stopped me with a word, but he did not utter it, so I went ahead stubbornly. At one point he asked, firmly, that I not call it the Experimental College. But by that time there was a budget line for an Experimental Collegiate Program (not a title of my choosing) and the program became known as the Experimental Program—close enough to make me uneasy. It did not occur to me then, but he must have had deep misgivings. It does occur to me now, but I am far from regretting the adventure—part of which I have recounted in Experiment at Berkeley and A Venture in Educational Reform. Nothing remains of it at Berkeley, but oddly enough there is a program at the University of Wisconsin that claims lineal descent from the original Experimental College.

            Meiklejohn continued the pattern of life he had established until the end. The intensity of his social life abated; his daily walks in the hills were a bit less brisk; he lingered a bit longer over his breakfast coffee, chatting leisurely with me when I dropped in to see him on my way to the campus, less anxious to get to his study to write. He told me one day, a bit upset, that he was having trouble writing—that he seemed to be writing in circles. He allowed me to dissuade him from publishing a short review that I thought was uncharacteristically personal in its polemics. And then, one day, neatly, without fuss, he took a deep breath and died.

            How can I describe the special quality of Meiklejohn’s presence? Beyond the crisp alertness, the sense that everything was being enjoyed, that every moment was a special occasion, beyond the flashing wit, the friendly invitation to combat, the unpretentious formality, the encouraging smile that seemed to tempt everyone into putting his best foot forward or living for a while on tiptoe. Most deeply I think it was a matter of awareness, a consciousness of significance, the sense that the world contained more things than one ordinarily supposed. Meiklejohn seemed to see more. Some of his responses—a smile of appreciation, a quick flare of indignation—came unexpectedly, so that you became aware, at least, that you were missing something. I remember an experience I had in a plane while a movie was being shown on a screen. I watched idly, not wearing the headset for sound. I saw lips moving, arms waving; it was strange and dull. But once in a while those around me burst into laughter, and I realized that I was missing something that was going on before my uncomprehending eyes. They were aware of something that was there that made the situation comic; I was blind, or deaf, to it. Meiklejohn always seemed to be tuned in to a richer world—one in which more things were going on than met the ordinary casual eye. In his contagious presence you became aware of stories, plots, dramas you would not have noticed on your own and which, when you left his presence, seemed to fade out of mind, persisting only like the memory of a dream. As his student, invoking the aid of his memory, I do not find myself asking, “what would Meiklejohn say,” but rather, “what am I missing that Meiklejohn would see?”

            As for “what would Meiklejohn say?” I must say something about how he said things. In a classroom, or in the midst of a group engaged in discussion, he said, in fact, very little. It is not that he would not take part in the exchange; but what he said always had the quality of an intervention. A question, a quick short sentence. I cannot remember him making anything like a sustained argument, or pressing a point, or loosing a barrage of words. His interventions were often startling and would send the discussion off on a fresh tack or recall it from a diversion; but they were brief, friendly, good humored, often witty. The unit of discourse was, for him, the single short sentence set off by an encouraging nod, a smile, or even a defiant thrust of the chin.

            But his speeches were quite another matter. I heard him speak in public many times, but I do not remember him ever speaking extemporaneously. He read what he had to say; it was prepared in advance. He read well, as such things went, but he did not make it up as he went along. And it was quite a different Meiklejohn. I was often shocked by it; it did not seem in character. Or rather it was, since he spoke often enough, another side of his character. He was nervous beforehand and he began calmly enough, but soon his voice rang out, he almost shouted—sometimes did—and there was very little diffidence at the heart of the argument. Full of fervor, full even of denunciation, hurling gauntlets all over the place. I hasten to add that this was not always the case. There were short graceful speeches, often of a ceremonial type, done gently and elegantly, also written out. But I mostly remember the Meiklejohn fighting speech, and it was far removed from the conversational Meiklejohn. He was well received, enjoyed, admired in his oratorical role, but it was the side of him I liked least. I was uneasy, I think, at the change in the familiar voice, the almost strident insistence of tone. Perhaps I was simply unfamiliar with the vanishing tradition of oratory. Still…Once, after one of his longer speeches, I remarked that I thought that it had ended anticlimactically, that I had noticed this about several of his speeches—a considerable letdown toward the end. “Of course,” he said, “of course. I have an obligation to return the listener to the condition in which I found him.” A characteristically surprising remark evoking the image of Meiklejohn taking his passengers on a wild roller-coaster ride, tapering off at the end, smiling and straightening their ties as they file out, handing their destinies, like transfers, back into their own hands.

            Remembering Alexander Meiklejohn! It is unlikely that there will be a seventy-fifth reunion of the Experimental College. Soon enough there will be no one left who will remember the lifting of the spirits at the sight of his spare figure briskly entering the room. I am filled with a regret that he would laugh at. Have I not heard of mortality? Are we to be concerned about the persistence of fame? The accidents upon which that rests? Meiklejohn was a great man and, I admit, I do not want him to join the anonymous ranks of forgotten great men. Every generation must have them—men who stand out among their contemporaries by virtue of character, integrity, intelligence, vitality, who leave a deep mark on those whose lives they have touched and then are known no more, who have not left a permanent monument behind to reinvoke their presence. They are the fresh incarnations of the great human archetypes, as Alexander Meiklejohn was a great incarnation of the type of which Socrates was also an instance—the teacher who seems never to be off duty.

“Remembering Alexander Meiklejohn” originally appeared in Liberal Educator (Winter 1984) and was reprinted in The Burden of Office, Talonbooks (1989)