Report to the Regents
I was holding forth to a class of seniors, developing an eccentric interpretation of King Lear, when I noticed something strange happening to a student in the front row. She was gazing at me, listening intently, but something was slowly oozing from her mouth, puffing out into a globe that obscured half her face. As I stared, faltering, the globe collapsed, the debris captured by a delicately questing tongue, the fixed gaze unbroken. “Bubble gum,” I realized belatedly, as I tried to pick up the thread of thought, “just bubble gum.” But I remember thinking, “It’s time to get out!”
The fate that had delivered me to the brink of retirement had timed it well. The juxtaposition of Lear and bubble gum was a hint of things I would not have to get used to. In any case, it was not a choice; I was in the last cohort that had to retire at 67 and I considered myself lucky not to have the option to continue to 70 or even beyond that, if anti-age-discrimination zealots were to have their misguided way, until the sway of tenure gave way to the judgement of the Senility Board.
So I became Emeritus. It was a gentle passage. A student, graduating, is thrust into a colder world, into a radically different way of life. The Professor, retiring, may merely shed the responsibilities of teaching, drift slowly toward the quieter edges of the academic community, and continue the pattern of his life undisturbed, except, perhaps, by the disappearance of all excuses. If I were a scientist and had to relinquish scarce laboratory space the break would be more drastic, but Philosophy is one of those disciplines that one can pursue with little equipment, although “doing philosophy”—a parochial expression I’ve always detested—may require having someone to do it to. But whatever one did in class, I had done enough of it; had, at the end, found it such a strain that after I stopped teaching I marvelled that I had ever been able to do it at all.
And life continued. No more department meetings, of course, and senate meetings only when they promised amusement. I kept the habit of lunching at the Faculty Club. It was, in some ways, a spiritual center of the campus, a place of gossip, of trial-balloons, of old-boy network transactions, of duels of wit, of committee meetings, of depressing “organ recitals,” a place to meet friends without appointment. It was a faculty-establishment counterweight to Sproul Plaza and Telegraph Avenue, and although most of the faculty shunned it, I liked it and felt at home there.
Strolling out through a sunny student-strewn Faculty Glade after a leisurely lunch, I would pause before the great gnarled buckeye that clings defiantly to life. Some wit had once painted “Let me die” across its torso, but it had outlasted the paint. A friend had dubbed it Yggdrasill, and it reminded me of the contorted Laocoön; but it was only a sort of emeritus tree, in no hurry to depart.
No more classes, but the mind did not seem to come to a complete stop. Now there was time to devote to the unfinished business of a long professorial life—the essay abandoned after a striking opening paragraph, projects regretfully put aside for more urgent work, lines pursued but not brought to a triumphant close, neglected fugitive inspirations. I was shocked to discover, as I sifted through old notes, how often the great new idea that seemed to have just popped into my head a few days ago had, according to the evidence lying irrefutably before me, also popped into my head fifteen years ago.
There was, of course, no need to do anything, but the feeling that you are not doing something you are supposed to do persists long after you have become aware of the fact that you are no longer supposed to do it. Even the familiar nightmare persists—I am hurrying down the hall in a panic to meet the class I have forgotten to go to for several months. How could I have forgotten! Would they still be there waiting! But I never reach the door…
I still feel I am supposed to be doing the work my classes were supposed to be interfering with. Doing nothing, becoming a “terminal consumer” without the dignity of function, is waiting around to die. So I keep working or loitering around my word-processor, polishing off unfinished business. In a gesture to years of teaching a course in Philosophy of Law I produced a clear and simple explanation—too long for an article, too short for a book, too reasonable for zealots—of the apparent conflict between The Rule of Law and Judicial Activism. I did it without referring to this or that legal case, or using legal jargon, or quoting from or discussing the views of current jurisprudential figures and, of course, law reviews politely declined to publish it.
I also wrote an informal account of the experimental program I was involved with in the 60's that seems to have relieved me of the need to do something heavier—and unwanted—on the dismal state of what we used to call liberal education. I published a short and rather unsatisfying essay on the Religion of the Marketplace. And I keep nibbling away at some other unconsummated inspirations.
And something keeps haunting me. I seem to think that I have a great undischarged debt to the University, that I owe it, or the Regents who represent it, or someone, an accounting. No one has asked for it. No one is aware that they want it. And if anyone wanted it and asked for it, they would almost surely not get it. And yet… I suffer from the fear of being ungrateful.
It is hard to understand the uniquely favoured life of an American university professor. Professors are not really “hired”; they are not simply employed to teach classes and write books and do research. They are, if and when they get tenure, admitted to permanent membership in a special community, something like a secular clerical order, a community supported by the tithes of the broader community that needs, resents, and mistrusts it. Essentially, the University is a stronghold of the mind. Professors are commissioned to think and to encourage others to think, to develop “thinking” as a way of life. They have to do many things, but if they don’t spend their lives in thought they are impostors. How a life is spent in thought is a long story, a story easy to misunderstand since it is full of so many things that don’t look like thinking. How little like Rodin’s statue starting up from time to time to shout “Eureka!” Saying that the state hires people to do research and to teach the young does not quite get it right. It supports an intellectual community and its way of life—a strangely alien way of life.
Each professorial journey of the mind leaves its own paper trail. Annual reports list classes taught, papers delivered, grants applied for and received, manuscripts in progress, books published, committee assignments—all the signs that mark the trail of each assault on the Mt. Everest of the mind, or the less ambitious stroll toward the peak of some academic foothill up which, youthful dreams faded, we dutifully plod unless, alas, we move off the trail for a long picnic. But the record is, however detailed, strangely unrevealing of the professorial inner life, of the turmoil of the life of the mind, of the failures to achieve fruition, of what being a professor is really like. What, I wonder, do our governors, who have seldom been professors, or even professional intellectuals, know about the way of life of those over whose destiny they supposedly reign? If glimpsed at all, half-hidden behind an administrative screen, the faculty must seem a vain, self-centered, impractical, and fundamentally ungrateful lot (now sprinkled with info- and bio-millionaires)—an impression not likely to be corrected upon closer inspection. A closer inspection that, should anyone attempt it, would probably be greeted with alarm as a threat to academic freedom!
And, if you wanted to, how would you find out? I used to read every academic novel I could find, hoping for revealing insight into university life. Often funny, but not, in the end, terribly rewarding. Seldom a glimpse, beyond parody, of what teaching is really like; comic portrayals of struggles for prestige and the illusion of power; some fair detective stories with professorial sleuths and criminals; P.C. scandals; and, for want of anything more original, sex—variations on the archetypal Arthurian tale of Merlin and Vivian, the magician-pedant-sage enamoured of an apprentice enchantress—a graduate student—who exploits him, gets her degree, and leaves him entombed, enchanted or disenchanted, in a tree—a metaphor, as is now fashionable to say, for the library stacks—brooding over his foolishly unanticipated desertion. I have found the literature of academia more likely to mislead than instruct an interested outsider.
Nevertheless, I am not going to try to write the missing novel, although I would if I could. Nor is this a substitute for the oral history that some of my colleagues, major players of recent years, have contributed to the university archives. Nor is this a belated apologia pro vita sua since I don’t feel that guilty. It is the sort of report I might like to read, from a long-time faculty member—unsystematic, desultory, reflective—without an urgent message, without—or even perhaps with—a last frantic grasping of the world’s indifferent lapel.
You cannot understand the university unless you understand tenure, but I do not propose to explain it, clarify its complexities, or even to defend it, although I certainly believe in it. The academic community maintains itself by the selective co-option of candidates and that process has been subjected to withering criticism: it is elitist, perpetuating its own biases, Euro-centered, fearful of a deep critique of class, gender, and cultural privilege, parading its arbitrary subjectivity as somehow transcendently objective, docilely serving the power-structure, fearful of life-giving new insight—and so on. I will not parade the defense against such generally silly charges. Tenure, with all its faults—and its virtues overwhelm its faults—is essential to the integrity of the academic world. Without it, a great community, however flawed, degenerates into a mere collection of employees and entrepreneurs—and temporary ones at that.
Tenure, once achieved, frees one from the immediate control of the external community. It is difficult for a governor or a legislator or an irate citizen to reach in to fire or punish a tenured faculty member. A professor can do many ridiculous things with impunity and that freedom from external sanction is the most obvious fruit of tenure. In a business culture a non-business way of life—homeowners who have never met a payroll—is strangely nourished and tolerated.
Important as that is, there are other, less appreciated, features of the tenure system. The real point of tenure is that it protects you from your colleagues and it makes possible a long-range conception of intellectual life untroubled by the urgencies of immediate fruitfulness.
Take me, for example. I have a PhD in philosophy from Berkeley and after teaching there for half a dozen years left under ambiguous circumstances. After less than a decade of teaching in the East (and having finally published a book) I returned to Berkeley as a full professor with tenure in the Philosophy department. Apart from teaching Introduction to Philosophy I generally taught courses in political and legal philosophy. Neither in substance nor in style was I, nor did I consider myself, a real philosopher anywhere near the difficult center of the philosophic or departmental enterprise.
There is a difference between “philosophy” and “philosophy of…” “Philosophy,” philosophy pure and simple (having, in the academic world, lost its suggestion of hard-won wisdom about life), concerns itself with some traditional problems like “free will,” or the problem—if it is a problem—of the external world or how we can know it or really know anything, for that matter. Or the reducibility of the mind to the brain. Or something about language or meaning. I will not produce a list of the authentic philosophical problems, but you get the idea. Philosophy deals with them, with impressive cleverness.
But the “philosophy of…” is a different matter. It presupposes an immersion in something other than philosophy—in law or politics, or art, or science—and deals with problems that emerge in the course of those activities.
This meant that after an apprenticeship studying Plato and Spinoza and Hobbes and Hume and Kant, etc., I spent most of my time pursuing studies in law and politics, not worrying—although I had put in time gnawing at those tasty bones—about the traditional philosophical problems. Nor did I fall into the swing of philosophic fashion. I overheard rather than participated in the arguments that swirled around Logical Positivism, or about the English school—G. E. Moore, or Wittgenstein, etc. I sampled but could not—and cannot now—stand the pretentious cloudy pomposity of the continental philosophers—Heidegger and all that—and wrote them off with distaste—as well as the later spate of fashionable Frenchmen.
I do not intend to try to justify this indifference to the main stream of academic philosophical activity or the consequence that I was of very little use to graduate students pursuing PhDs and jobs in the field and who had little time for things like law and politics that were, at best, marginal to professional academic philosophy. I drifted almost entirely into undergraduate teaching: introductory philosophy courses, dealing with some great works at a minimally technical level and, in the “philosophy of” area, developing fairly unique courses in law and politics more useful for students in political science or law or students generally than for prospective graduate students in philosophy. I did not argue for the inclusion of the legal or political field into the core of graduate philosophy education for many reasons, including my belief that if it were included I would have to work with graduate philosophy students and would be forced to deal with the way in which the philosophy profession dealt with the field—a way that seemed to me to be more or less worthless.
As a result, I taught very large numbers of undergraduates and virtually no graduate students. I was happy with the arrangement, although it left me outside the main concern of the department. And I sometimes had the feeling that, from the point of view of the department, I was dead weight. Or, at best, performing a service that was departmentally necessary but uninteresting professionally. Everyone was very polite and I felt no pressure but I was not very involved in producing philosophers and that, on the whole, was what the department cared about, what it thought it was supposed to do as a department in a graduate-school-oriented research university.
The point is: I was safe, secure, untouchable. I had tenure. I filled a slot that could have been filled by someone more involved with what was going on in the profession. The world was full of them—bright philosophers more than willing to come to Berkeley with the word from the East or from England or even Europe. If, instead of tenure, I had to be rehired every few years by a decision of the department the chances are that I would not have been around very long or, alternatively, I would have had to do—had I indeed been able to—what I did not think worth doing and get into the going game. But I went my own way, politely tolerated by the department that could do nothing about me. Even when I served several short stints as chairman I was essentially an outsider. I don’t think I was regarded as a real philosopher by the department. But, as I said, I didn’t have to worry about it. However one might assess the value of my life as a professor, I was able to live it only because, in my opinion, I was shielded by tenure from the pressure of departmental judgment. For better or for worse. The subtle point of tenure is that it frees you from pressure by colleagues. Academic politics, as every insider knows, would be intolerable without the tenure truce. Whether the academic community is better off or more socially useful because of this arrangement is hard to prove, although I do not hesitate to say so. At any rate, I could not have lived the life I lived without it. Tenure allowed me to pursue my own path in the areas of education and politics and law unconcerned about the judgment of departmental colleagues. Or of anyone else. And to do it without concern for short-term results.
A Hospitable Refuge
But perhaps I should say something about my relation to philosophy, or the academic philosophy that has for so long provided me with a hospitable refuge and from which I feel ungratefully estranged.
I remember, as a teenager, reading the family Everyman edition of Spinoza’s Ethics while riding the streetcars of Milwaukee. Why? To what avail? When much later I took a course in Spinoza, it felt strangely familiar. As an undergraduate at the University of Wisconsin I took several philosophy courses—a standard “Introduction” that swept through Locke, Berkeley, and Hume, which I found very exciting, and something about Pragmatism that introduced me to William James, whom I liked, and John Dewey, whom I found hard to like. I remember lugging Dewey’s Art as Experience—a heavy book—around the campus, some sort of badge I suppose, but I don’t remember what it says. It is still on my shelf, still unmastered. And, importantly, a class from Meiklejohn in which I finally encountered Socrates and became a lifelong Meiklejohnian. On his advice I left Madison, in 1937, and went to Berkeley to pursue graduate work in philosophy.
I found myself quite out of my depth. Although Madison was miles ahead of Berkeley in political sophistication, in philosophy Madison was quite parochial and not, I think, in Berkeley’s league. I had not even had an undergraduate major in philosophy and had lots of catching up to do.
There were three groups in my philosophical world. First, the faculty—seven intelligent men, a fairly close-knit group socially, engaging in civilized controversy. The remorseless graduate student mind was inclined to be harsh in judgment of some of the faculty, but I respected most of them. They had taken pride in the fact that any member could teach any course offered by the department, although recently that pride had been humbled by the admission that modern logic, after Principia Mathematica, needed expert handling and could be taught by only one member.
Second, there was a small but powerful group of senior graduate students. In those pre-war depression years there were few academic jobs, and the department allowed students to stay on far beyond the relatively short time needed to win the Ph.D. These were, as I saw them, seasoned philosophers, more like faculty than students, but in constant contact with rawer graduate students. They were advanced in knowledge, deeply involved in the issues pervading the philosophical world—especially England—formidable in argument, purveyors of advice, and possessed of technical lore I saw little hope of attaining.
And then, my peer group, some very bright, some amiable drifters who, sooner or later, dropped out. Philosophy was not one of the “hard” disciplines, but the more hard-headed among us—not including me—put their energies to work at logic, doing Principia proofs and scorning aesthetics, ethics, metaphysics and all such foggy subjects. Those not in the logic elite seemed hapless by comparison. We all served as teaching assistants and ambled towards our prelims which, successfully negotiated, left only a dissertation between us and the slim job market.
I drifted toward political and social subjects, without notable enthusiasm, and can’t remember any of the philosophy I studied. I do remember that in the year before I took my exams I was interested in two things—Dostoyevsky and skiing. And in political argument. Among my graduate student peers were a few Marxists or, as it later turned out, actual communists. They were part of the cluster of bright students surrounding Oppenheimer. I had left Wisconsin cured of my feeble Marxist tendencies and enjoyed a running battle with them. Since they were very bright I could claim few victories, although I remember gloating as they disappeared for a few days to develop a response to the Hitler-Stalin pact that I and an equally non-communist friend had, by a stroke of luck, predicted.
I took my exams, squeezed through, and early in 1941, some six months before Pearl Harbor, was drafted into the army for what was supposed to be a year’s stint. I had never held a gun in my life and was a private in the 7th Division stationed at Fort Ord when Pearl Harbor was attacked. Shortly, I was sent to Officer Candidate School at Fort Benning, became a second lieutenant and was sent to Texas to train troops. I can remember standing on a platform lecturing a battalion on the use of the machine gun—the only machine I’ve ever understood. My request to transfer to the Ski Troops was denied and one day, without any initiative on my part, I was plucked out of Texas and dropped into Berkeley for a 6-month intensive course in Chinese. When I once asked why I had been chosen I was told that it was because I was bilingual—that is, I, who am not very good at languages, had passed my French and German exams in graduate school!
I was sent to China and spent the rest of the war there, generally able to operate without an interpreter, involved in the effort to train Chinese troops and re-open the Burma Road. I ended as a Major doing intelligence work and was saved from a possibly perilous assignment by the Hiroshima bomb, after which I turned down the chance to move to Shanghai and returned to Berkeley, to the wife I had married before I went to China and to the Philosophy Department. With army-inspired discipline I polished off my dissertation, got my PhD and was launched on a teaching career.
One of the senior philosophy graduate students I have mentioned had joined and was out to transform what was then called the Speech Department. It offered a freshman course that was an alternative to the English Department’s course in Reading and Composition and it stressed the analysis of argument and writing. Although I was uneasy about being part of a mere “speech” department, I accepted an invitation to join the maverick group of philosophers, historians, political scientists and psychologists running a lively enterprise teaching supreme court decisions and Platonic dialogues to undergraduates. I enjoyed it, and there I met the powerful blind constitutional law scholar—Jacobus tenBroek—with whom I worked for a year writing an analysis of Equal Protection. And then I moved to the Philosophy Department.
But I still feel that I didn’t really belong there. Logicians went their own way with superb confidence, without my comprehension or cooperation. Logical Positivists were fighting a rear-guard action without my help. Continental philosophy seemed foggy, pretentious, and hopeless, and the department had little or no use for it. The English were riding high, but their emissaries impressed me as clever, acute, articulate, and formidable—but not important or, to me, interesting. So I plodded along, working on political philosophy, law, and even education—out of the mainstream. I left Berkeley to teach in the East for about eight years and returned to the philosophy department until I retired in 1982.
So here I am, without classes to teach, without an active role in an organization, with scraps of unfinished work that really need not be finished, with a variety of regrets and guilts, with knees that limit mobility, with assorted cracks in a physical facade, with lapses of memory, with undelivered messages that no one is waiting for. Nothing that I have to do, but still with the sense that there are some things that I ought to do. It is hard to imagine what life would be like without the feeling that there are things that I ought to do. In fact, the prospect of living without “obligations” is devastating. A vacation is a time in which a role and its obligations are to be laid aside for a bit. The horror of retirement is that the role is to be stepped out of for good. We know the sense of relief with which, after a vacation, we resume the burdens of the role. But after retirement? The essence of being a human adult, of having dignity, is to have obligations. Obligations are role-connected. Thus, retirement threatens us with de-humanization. Which is why, I suppose, we resist giving up, relinquishing the professorial role.
The University is helpful. A Professor one remains, although ambiguously “emeritus.” Retirement reveals its layers. You doff the mantle of Teacher; you may still play a part on committees. And your research, your writing, can go on forever. You can keep on doing your work, your “real” work, with which your other duties may have interfered. Two decades after formal retirement I still come down to my office to commune with my word-processor.
But there is a private drama or tragedy or comedy over which we tend to draw a protective curtain—the spectacle of the mind in disarray, losing its power, its grip on the art of sustained coherence. I do not mean the increasing difficulty in remembering names, nor remembering what one is about to say, easily derailed if even momentarily diverted. These, I suppose, are common enough aspects of aging and while annoying are not terribly serious. More serious is the sense that one’s span of attention is shrinking. Or perhaps the span of “control”…
I seem to remember that, in my flourishing days, I could speak extemporaneously on something I had been thinking about—whether for a half-hour or an hour—with hardly the use of a note and with the feeling that I had a sense of the whole so that every part was under control, played its subordinate part in the composition—not too little, not too much. The part was dominated by its place in the whole, did not get out of hand, did not try to go into business for itself, accepted the curb without a struggle. One thing led to another easily and appropriately. I think of it, perhaps ineptly, as like the relation of the melody to the note, as if having the melody in mind determined the order and quality of the next note. But now it is as if I cannot keep the whole thing—the theme—in mind; the parts are consequently ungoverned; instead of one thing growing out of another, things are merely added on without organic connection. The range of my control seems to shrink—from a book to an essay to a few paragraphs. I grow less discursively coherent and become more aphoristic and heavily repetitive, dogmatic, assertive. As if I can control a sentence but not much more beyond that. I have, I think, lost my working sense of form. And, with great disappointment and pain, I have finally given up on my great Milton-Hobbes project.
The Milton-Hobbes Project
Ages ago, I wrote my doctoral thesis on The Political Theory of Thomas Hobbes. I still remember the relief and joy with which, the day after I passed my final oral exam, I carted loads of books back to the library with the feeling that I could now think about other things. I had spent a good part of a year engaged in benign interpretation, protecting the creator of Leviathan against the usual hostile treatment doled out to a defender of strong sovereignty, a treatment not entirely due to the mood of a world that had suffered the ordeal of the war against totalitarian dictatorship. As one might expect, I found that he had been misinterpreted; that, if you read him carefully, you could see that he didn’t mean one thing or another, that he was quite a decent fellow; that, while he had some streaks of crankiness he had a grasp of something really important. I had grown quite fond of him, as had John Aubrey in whose Brief Lives Hobbes has a starring role. Still, it was with relief that I put him aside.
“Put him aside” does not, of course, tell the whole story. How much, in how many ways, he had infected my mind I could not tell then, nor even now. Clearly, the problems of authority and subordination that he posed remained like a work-out machine in the gymnasium of the mind for the exercises of a lifetime. But subtle influences must have been at work undetected. In those days I was a bit radical. On the side of Parliament against the King, seeing Pym and Hampden as heroic fighters for freedom against Stuart tyranny. Only later did I discover my taste for executive as against legislative power and replace Pym and Hampden with Wentworth and Cromwell. I do not blame all this on Hobbes, but he may have played a part in my movement—a temporary movement now shifting into reverse—from Young Turk to Old Guard.
In any case, Hobbes, no longer a preoccupation, remained on the scene. I resisted the advice of my departmental seniors to turn the Hobbes thesis into a book, in aid of the quest for tenure. I did not think it was good enough as it stood and I did not want to invest more time wallowing in a work that had served me well but that would have been a diversion from my growing interest in law. And I think I was reluctant to have my life shaped by “accident.” I remember leaving the army with more command of Chinese than of any other language save English and considering pushing into studies of classical Chinese and Chinese philosophy—as the post-war generation of Japanese experts had done with their wartime acquired knowledge of Japanese. I was tempted, but I decided not to let my life be shaped by the accident of an Army assignment—little realizing that life is shaped almost entirely by such accidents. I felt the same way about Hobbes, who happened to be around to write a thesis about. I was looking for a subject I could do with dispatch—a great work I could comment on, in the field of political philosophy, by a significant but somewhat neglected figure. I punched in the specifications and out popped Hobbes and Leviathan. But much as I liked and appreciated Hobbes I did not want to make a career as a Hobbesist. Hobbes scholarship has become a minor industry since those days and I do not regret not being a part of the boom. I value Hobbes entirely for his political writings, and while he is full of wit and has a great capacity to turn a phrase, in the end he has one great idea—peace requires universal submission to a common authority, the difficult acceptance of subordination, the restraining of pride. This is a theme especially appropriate in the teaching of rebellious youth and I made heavy use of him over many years. He remained in the background of my mind. I used him when appropriate but I did no further “work” on him.
About Paradise Lost. I had often tried to read it, but I bogged down—as do most people who try—after the first two books that scintillated with the great debates in Hell. But in the late 60’s it became a central text in the educational program I was involved with. In that context I worked my way through it and, to my surprise, over the following years, fell in love with it. It was, perhaps, a belated enchantment. Paradise Lost was not in fashion. T. S. Eliot, an undeniably significant poet but otherwise something of a fool, had launched a telling attack on Milton that I had heard of but ignored. I kept reading Paradise Lost.
It must have been in the early or middle 80’s that it dawned on me that I was involved with two great 17th century works, an epic and a treatise, by two great figures for whom the Civil War was a crucial experience; both preoccupied with authority and rebellion—one essentially secular, the other significantly religious. I cannot remember the crucial moment in which I was overwhelmed by the idea that I was to do a comparative study of Leviathan and Paradise Lost.
It is odd that such a study was, and is, still to be done. For over three centuries Hobbes and Milton have loomed as landmarks of Western civilization. Contemporaries, Englishmen, both famous in their lifetimes, who seem, so far as I can determine, never to have met. Their intellectual interaction, before the age of footnotes, is undocumented, unrecorded. To this day, serious comparative studies are virtually non-existent. Perhaps because students and lovers of poetry are not much interested in political theory and do not grapple with Leviathan, and students of politics and political theory have little time for Paradise Lost. In any case, a major comparative study had yet to be done, and I was overwhelmed by the desire to do it.
What I intended, or think I intended, was a comparative study of Leviathan and Paradise Lost, not of Hobbes and Milton; of two great works, not of two interesting authors. And if I were pressed to say what I meant by “comparative,” I don’t think I could come up with a very good answer. How do you compare an epic and a treatise? Compare the ideas, ignoring the poetry? What did they think about rebellion? An almost guaranteed descent into pedantry. What was the point of doing that? Unfortunately, I did not think this through. Here were two great works; I had a deep interest in each; and I would, somehow, write about them together.
Alas! I hate to admit it, but I think it was, in spite of its attractiveness, a bad idea. For something like a decade and a half I struggled with it, intermittently. Sometimes, in spite of my desire to deal with the two works as self-contained masterpieces, I would immerse myself in the world of 17th Century England, or I would read widely in the literature about Milton and try to read Milton’s prose works—works that I found, I confess, almost unreadable. I defy anyone to enjoy Milton’s prose. But I read and re-read and re-read Paradise Lost. I re-read Leviathan, of course, although on Hobbes I relied on my memory of my graduate-school Hobbes period.
I had some fun playing detective. Had they ever met? I ran down every clue I could think of with no luck. Could I prove that Milton had read some Hobbes? No, although at some points the Miltonic language has a Hobbesian ring. Could Hobbes, loving poetry and translating the Odyssey and Iliad into English in his old age, have not managed to read Paradise Lost? Not a clue! Pursuing such matters I almost felt like a scholar and enjoyed the illusion.
And, of course, I wrote. Mostly about Paradise Lost, tacking on a bit about Leviathan, almost as an afterthought. First one organizing principle and then another, riding one inspiration after another. Pages, drafts, piled up—several hundred pages. Some not bad, lots uninspired. It did not add up; it did not hang together. I would give up and then, months later, glance at some of it and think it not bad and worth going on with or trying again. And then, in a few months, abandoning it with a sigh of relief mingled with guilt. Until now. I can’t do it and will stop trying!
Or at least I think so. I have put, altogether, years of work on the project, an enormous investment of time and energy. Giving up on it is not easy. It is to acknowledge a dismal failure. It is not that I have so little time. It is that the very idea that still seems so glowingly significant as I consider it abstractly begins to fade as I work on it. And now it dawns on me that it may not be the idea that fades but I who flag, fall short, can’t think it through clearly or, if I see some light, can’t seem to write it out. What I write or have written seems, when I look at it, flat, shallow, uninspired, and I refuse to, I will not, write a dull book about Paradise Lost. But it is hard to quit. As I turn my back on the project I feel sad and depressed. But at the thought of plunging once more into the task my heart sinks. If I try again I will never leave off; it will be the last thing I ever do and it is not that good an idea and not what I want to die struggling with. I leave it as a great unwritten book.
But even as I say this I feel a familiar stir of excitement, a surge of interest, a flick of determination to do it after all. But I have felt all that before and have been lured, more than once, back into the inviting labyrinth. No! No! No!
Can’t I Get Over the 60s?
It is now 1996, but I am often reminded of the fact—or at least the charge—that I am a case of arrested development, have not gotten over the traumas of the 1960’s. It is probably true, and in my defensive moments, a bit grandiosely, I imagine someone saying to Edmund Burke, “Surely, Mr. Burke, isn’t it time for you to get over the French Revolution?”
I grant the disproportion—I do not confuse myself with Burke; the student uprising of the 60’s did not treat us to a reign of terror, although we veterans recall a whiff of tear gas and the occupation of the campus by police and the National Guard. Lots of turmoil, Jacobins at the microphone, the interruption of some habitual rituals of academic life and, beyond the campus, some political consequences—like the presidency of Nixon instead of that of Hubert Humphrey—but on the whole the system rolled with the punches without suffering cataclysmic change.
And even on campus, scars healed. Student energy directed at educational reform exhausted itself in the tendency to abolish “requirements” in the name of freedom until it gathered itself to push for a new class of requirements—like ethnic or gender studies—pushing the claims of the margin against the center. But these innovations, while making some claims on budgets and, along with affirmative action (not really a 60’s phenomenon), bringing some under-represented groups into academic life, still left the heart of the traditional academy largely unimpaired. Or at least, left traditional academic practice an option for those who sought it. With, perhaps, some grumbling about reduced budgets, more red-tape, and (sotto voce) more people around—ignorable but still irritating to purists—who didn’t really belong there. But even in the largely lily-white days I remember my faculty elders saying that they found it worthwhile addressing themselves only to the top ten percent of the barbaric or philistine student body (what were those blond crew-cut be-sweatered be-lettered clods doing in an institution of higher learning!) and I now found myself maliciously consoling unhappy academic colleagues that it should make no difference to them what a large part of the student body, that they ignored anyway, looked or sounded like.
So, with the end of Viet Nam (and can we number those who have never gotten over that?) and the end of the cold war enough had changed in the world to reduce the memory or experience of the student revolt of the 60’s—our current students were not even born then!—to a small blip on the receding historical horizon. Even among the old Faculty Club veterans with whom I regularly consort the memory of those days is tinted with the wry humor that accompanies the sight of shrinking mountains. Even the bad old days are now the good old days; the pain is gone. That was more than a quarter of a century ago. Forget it!
So what does it mean to say that I have never gotten over the 60’s? And especially since, now that I think of it, the charge is fundamentally correct. Not in the sense that I think about it all the time, or that I talk about it a lot, but that it has colored or shaped my awareness to a degree that astonishes me. How? About what?
To begin with, let me say why I, at least, might properly be influenced in a lasting way by the 60’s—as contrasted with many of my university colleagues. If you are a chemist or a physicist or any sort of hard scientist or engineer your basic work was relatively untouched by the “issues” or spirit of student activism. But if you were a social scientist or a humanist, the challenge to the society or social structure embodied in the revolt of the 60’s touched you and your work more troublingly. The attack elsewhere on Jewish physics or on capitalist genetics—Mendelism-Morganism—was taken by us as a reductio ad absurdum of ideology, not taken seriously, not requiring noticing and answering. But the social studies and the humanities seemed more vulnerable to the charge that they were not rooted in the study of something real or “objectively out there” but rather projected the merely subjective categories of conventional, biased, and even oppressive social or cultural interests; that education ought to free the mind from subjection to these received habits of perception, not to warp the innocent mind into their shape. This challenge to the legitimacy of social and humanistic studies placed a burden on the inhabitants of these academic domains not borne by the rest of the university, and made them more likely to remember the 60’s as not only a time of external troubles but as a challenge to the very basis of their professional lives. So I would expect students of society to be more deeply affected by the 60’s than, let us say, our chemists or engineers.
But for some—and I will now speak only of myself—there is more. Education, it may be necessary to explain, is essentially “initiation.” It is the process of initiating a child or a novice or a candidate into an ongoing social process. Learning the language, your mother tongue, is being brought into participation in a particular, existing, on-going social practice. Professional or vocational education begins with the initiation of apprentices; most education is initiation into, broadly speaking, vocational life. And liberal education, no exception to this rule, is initiation into the great political governing vocation. As I was, in one context or another, a teacher of introductory political philosophy, I was fated or doomed to encounter, head on, the generational revolt of the 60’s. Initiation into what? Into this awful society?
I hasten to observe what is easily and usually overlooked. Initiation is not merely “acceptance;” it involves both appreciation and the essential and constructive art of criticism. Appreciation and criticism are two sides of the same coin. I say this in the hope—a real hope that I am fairly sure is vain—of fending off the charge that initiation is merely indoctrination and that it involves the dulling of the mind that would otherwise dare to be critical. This is an ignorant view that fancies itself as sophisticated, sceptical, and even cynical. But if you want to play with it a bit, consider if we can initiate anyone into the musical arts—or turn out a music critic—without encouraging a love and appreciation of music, without which the criticism of music is silly nonsense. The same, believe it or not, goes for the great art of governing, of politics.
So the problem I had to confront in the course of teaching was that of finding the basis, theoretical and practical, of the initiation of the new, the maturing generation into the political—and I use this term broadly, aware of the negative freight it carries—life of the society; as the bearers, the continuers, the inheritors of a great culture that seems to have been displaying its deep flaws to those to whom it was extending the invitation to join.
I will not repeat here what I have written about the experimental program that lived from 1964 to 1969 on the Berkeley Campus. This, of course, did mark me deeply. I cannot and do not want to forget it. I have not “gotten over it” and that admission is enough to justify the charge that I have not gotten over the 60’s.
But the point is that for some of us, working at a particular educational station, it was not just a time of disturbance, like an earthquake, a merely external shaking. It was a deep and inescapable challenge not only to the performance of the teaching function but to the very conception of the legitimacy of the function itself. The effect on me was to make me sensitive—or oversensitive—to the profound, inescapable, universal problem of initiation, of the generational succession, sometimes seen as generational conflict and even generational war. In short, the educational vocation—at every level—is doomed by the very nature of initiation to be deeply involved with the traumas of the generational succession. And nothing, in my lifetime, exemplified that confrontation more starkly than the campus turmoil of the 60’s.
I am not interested here in expounding on the concrete experiences of university life in those days, although it taught its lessons. I am interested in examining the more subtle effects on the structure of my awareness. As I have suggested, it has a great deal to do with the idea, the reality, of that fascinating fact of life—the generational succession. And I am surprised, now that I think of it, at how it seems to have taken over a good bit of my mind.
To begin with, there is an awareness of the precarious fragility of continuity. Any persisting human enterprise or institution rests on the establishment of habit, and the inertia of habit is enormous. Conservatism lives on the strength of the habitual, and any attempt to change a way of life is challenged, if not baffled, by the strength of the customary way of doing things. The forces of continuity are such that it may seem odd to speak of its fragility. And yet, if habit is powerful it is still necessary to constantly re-establish the habit. Skip a generation of habituation and the chain of continuity is broken, often beyond repair. “Carrying on” is dependent on the recruitment of successors, and when and if that fails something is ended. So the problem of succession, of the generational succession, is inescapable, vital. And it takes many forms.
Consider the relation of the creator to the inheritor. It finds paradigmatic expression in Milton’s Paradise Lost—to say nothing of Genesis itself. A Creator has wrested a domain from chaos, has organized it, established—or tried to establish—an order, presumably embodying a conception or plan that, in the mind of the creator, is good. It is a considerable achievement, a successful battle against chaos, a triumphant imposition of form, a marshalling of means for the sake of an end, a conquest over recalcitrance and inertia and centrifugal tendencies. The establishment of a purposive order.
But the immediate problem is continuity, the carrying on of the enterprise, the fulfilling of the intentions of the creator. So the creator must find heirs, successors who will lend their energies to the carrying out of a conception—someone else’s conception. The heir needs a special set of virtues, not identical with those of the creator. Meekness suddenly appears—only the meek can really inherit. Fidelity, submission to a vision other than one’s own. Loyalty, respect, even gratitude…unprodigal daughters.
The conflict or tension between the creator and the inheritor is a theme with many variations. It plays itself out in every family that grapples with parental plans and the children’s struggle for independence; in every institution struggling to keep alive a dimming vision as it turns its destiny over to recruits with deviating aspirations of their own; in every constitutional order bowing to the authorizing vision of the founders while seeking to adjust to a new order of things. Everywhere we look we see founders and successors, creators and inheritors, oldsters and youngsters caught in the precarious act of handing over, to carry on or to break away, to write the next chapter or to start a new book. Since we are mortal and inter-dependent we can never escape the great trauma of the generational succession although, of course, we may grow insensitive to its pervasive presence.
Not to have gotten over the 60’s means, I think, that you see this drama playing itself out everywhere. It does, of course, but that doesn’t mean you have to keep harping on it. But for people like me—bruised by the 60’s—harping on it is almost a way of life.
If there was anything characteristic of the 60’s it was the proud claim of the student generation that it was indeed a self-generated generation, immaculately conceived, not really, in its search for identity, the children of the American middle class, ungenerated by them. A generation trying to break with its generators, denying any debts, rejecting any heritage. And, even in some sort of defeat, remembering that at least it had engaged the established power “in dubious battle on the plains of Heaven [near the Chicago convention grounds] and shook the throne.”
But that is not the worst of it. Friends still shake their heads sadly over my sour refusal to see the Cordelias of the world—those naive flower children—as heroines. And why do I see Creon’s problem more forgivingly than Antigone’s? Or smile wryly at the sight of Creon’s idealistic son, in an amusing role-reversal, advising his father to bend and compromise? Or see the central trauma of Moses’ life as his setback in enshrining the Tablets of the Law by the seductive attraction of the Golden Calf—that desert Woodstock—and the decision to skip a generation in the hope of finding people worthy of a promised land? Or, much farther afield, insist that education cannot treat students as customers whose desires are to be satisfied but as apprentices to be initiated into ongoing enterprises in a generational succession?
All this, and much, much more, floods to mind when I consider the possible truth of the charge. Every great social crisis teaches its own lessons, expresses one of the great recurring facts of life. To forget or “get over” is to experience in vain. Some of us—a dying generation—can never get over the experience of Munich and appeasement; some can never get over Viet Nam; and some can never get over the great turmoil of the generational succession of the 60’s. To be unscarred, unmarked, unshaped by what we live through is the mark of a dulled consciousness, untaught by the curriculum of life. I could, I suppose, protest that I have learned other lessons, that the 60’s experience takes only its due place in my mind, that I am not distorted but only appreciative. But I won’t bother. O.K. I never got over the 60s…
Or the Thirties?
A friend, listening to my babbling about the sixties, remarked, with a patient shake of the head, “But have you realized that you have never gotten over the Thirties?” I had not realized it; it had never occurred to me; was it true?
The 30’s—what was there to get over? College from 1932 to 1941. On a shoestring, waiting on tables, and then as a teaching assistant, always economically strapped. My memories of my college days—undergraduate and graduate school alike—are not happy ones, but not because of my economic situation, a not uncommon one in those days. But still, it seems a time of sullen discontent, really the most miserable time of my life. Why?
Clearly not because of material circumstances. I was living the usual two-track life of a healthy college student, inhabiting, like an amphibian, the world of sex and the world of ideas. Each had its traumas and its joys, nothing was unalloyed. But it was the realm of ideas that I recall as the domain of discontent. What was the world coming to? And, what was I to do? What happened then that I never got over?
What was I to do? Although a major preoccupation, connected with the state of the world, it seems in retrospect less a crisis of decision than a persisting overhanging gloom. I went through an array of familiar temptations in the vocational desert of college kids without capital. Although stirred by injustice, I would not go to law school, the goal of my argumentative friends. Nor to medical school, the goal of instinctive healers almost as difficult to reach, for Jewish students then, as the lovely sleeping maiden surrounded by a wall of fire. In my junior year, in the absence of anything compelling and no doubt inspired by my father’s diffidently expressed hope that I might become an agronomist and go to Israel—still in those days Eretz Isroel or Palestine—I actually transferred to the Ag School and for a whole year trudged out to the far end of the Madison campus to take courses like Animal Husbandry. I still remember the sketched outline of a Percheron—not unlike the horses pulling brewery wagons in Milwaukee—with arrows pointing, among other things, to something labelled “fetlock.” I stuck it out for a year, surrounded by farm boys who knew, as I did not, the difference between wheat and barley and had experienced crop-rotation. I had hoped it would offer an escape from the world of business and competition. I saw myself striding over ploughed furrows in fresh mornings wresting an honorable living from the soil, exploiting no one. But after a year I returned to more familiar depressing haunts and ended by majoring in labor economics, although no longer with the illusion that it was my destiny to lead the insensate labor movement of Samuel Gompers and Bill Green into its proper class consciousness.
I made one effort to escape from the gray vista stretching out before me, and it ended in comedy. On an impulse in my senior year I entered the Rhodes scholarship competition and although my academic record was a shameful mixture of grades my verbal antics overwhelmed the screening committee and I headed the list of candidates from the University. To a poor boy from Milwaukee the thought of going to Oxford was like being Dorothy transported by a gale to the magic land of Oz. I wallowed blissfully in the dream for a few weeks before I was brought back to reality by what I think of as “the fiasco of the purple suit.”
A year earlier, a friend of mine had actually won the scholarship, but he was a serious and accomplished scholar and had been carefully groomed by supportive faculty sponsors. What stuck in my mind was that he had been carefully outfitted in a tweed jacket with—this was stressed—extra-long sleeves. The day before the contest my supportive parents sent me to a good clothing store to buy a suit. The impressive clerk coolly waved aside my feeble murmur about something loose in tweed and led me to a rich dark blue suit—a study in quiet dignity fit for the English aristocracy. Oh for a level playing field! He sold a dozen suits a week; I bought one in five years. It was no contest. I was to pick up the suit in the morning and wear it to the University Club. But outside the charismatic presence of the salesman-enchanter, in the clear light of the next morning I beheld the truth. Sharp, double-breasted, ill-fitting, sleeves too short, shoulders only a bit short of Zoot-suit and—the crowning glory—not really a rich subdued blue but rather a garish purple. No doubt other factors came into play, but I have always attributed the debacle—my descent into cloddish demoralized inarticulateness—to my awareness of the purple suit, as fatal to my hopes as the shirt of Nessus.
But there is a difference between “getting over” and growing up or moving on. And if I have not gotten over the 30’s it is not because of great personal traumas. What could have happened to me? All my life I was either a student or a teacher, except for a period of four or five years of army life that separated the two conditions. I look back on the army years with more affection than upon my years as a graduate student. My only immersion in a non-academic culture. I remember the great moment when I turned in my civilian clothes for regular army draftee clothing—stripped naked for a new life, a new beginning—like Joseph, indeed, emerging without his shirt of many colors from a hole in the ground. But I bear no scars, and there is nothing about that great episode that I want or need to get over.
But there is, I suppose, something thirtyish I never got over. It can be summed up in a single word: Munich. And since my political state of mind owes much to Munich and the 60’s I should sketch what “Munich” signifies. The domestic politics of the 30’s was about what to do about the Great Depression. For me, in those days, Norman Thomas, a civilized democratic socialist, seemed to be right. But I voted for Roosevelt, although I regretted that he was merely an improviser without—alas!—a clear guiding ideology. Still, the New Deal and all that was better than the proposals of those to the left of Thomas who flirted—absurdly, I thought—with revolutionary hopes.
But foreign policy was claiming my attention. In those days we were as yet undeceived by Orwell and Hemingway and rooted for friends in the Lincoln Brigade, watching in horror the looming triumph of Franco. And there was Mussolini. But dwarfing everything was the challenge of Hitler.
In spite of the fact that I am Jewish—deeply although secular—and was perturbed about Hitler, I was, although far from neutral as between Germany and England (or rather, the British Empire), in the polemical categories of the day, something of an isolationist. My heart was not really in it, but there I stood. In the lull between the two world wars I had grown up with, absorbed, the bitter anti-war sentiments of war heroes like Siegfried Sassoon, had felt horror at the shameful and futile butchery of trench warfare, the stupidity of Generals, the dubious legitimacy of the cause of the Allies—empires struggling against challenge—and, as the crowning fact, the injustice of the treaty the victors imposed upon Germany, and consequently our culpability for the despair of Germany and the “understandable” appeal of Hitler. Who could read Keynes on the Big Four, on the pervasive venality and stupidity, his suavely vicious attack on Woodrow Wilson, without coming to the view of a wronged Germany victimized by France and England aided by a hapless and naive America? This sense of a wronged Germany, reinforced by the desire to refrain from being lured into once again pulling the chestnuts of Empire out of the fire for the British, cooled the zeal to intervene against the ominous German threat. So there they were: a rotten British Empire, a wronged and raging Germany, a Soviet Union, the kidnapper of a utopian hope, now bogged down in brutal tyranny.
Nevertheless, for me, the great evil was Hitler, although I don’t think the horror of what was to come was seriously realized. The prospect of defeating him seemed hopeless—all those tanks and planes and goose-stepping battalions. We would have to do it. We—not the hopeless French or the feeble English or the purge-riddled Russians. Lindbergh was telling us how irresistible the Germans were. Why get into that mess (except that Hitler…)? That I would probably be drafted no doubt added an element of opposition to intervention, but I really don’t think, although I may be wrong, that that played the major part in determining my position. I was for staying out (but I couldn’t get Hitler out of my mind). I studied and knew more than my friends about the Neutrality Legislation that was the contribution of William Jennings Bryan, Wilson’s Secretary of State, to our foreign policy. I pounced on every sign that Roosevelt was trying to get around it; I was convinced that he was trying to get around it, and my indignation about that overwhelmed any attempt at judging whether the policy of intervention was wiser than the neutrality or non-intervention or isolationism proclaimed by the, to me, unsavory elements rallying around “America First!”
It is odd to realize that my intellectual and moral quandaries were not resolved by the triumphant work of the mind but by external events. I was drafted into the army before Pearl Harbor, for what was thought of as a year’s interlude. I didn’t really mind, since it only interrupted my graduate studies in philosophy, studies that I found quite unsatisfying. Six months later came Pearl Harbor and the rather surprising declaration of war on us by Germany, so we were in it and all doubts were transcended. From that moment I had no doubts about the rightness of our cause or what we—America—had to do—a happy condition that lasted while I was in the Army to the end of the war.
So not to have gotten over the 30’s means for me that I have retained to this day the beliefs that, in internal affairs, the government should intervene in altering the habitual institutional structure on an ad hoc, tentative basis, curbing or guiding economic activity in the name of the public good; and that internationally, we should avoid appeasing irrational tyrants and be prepared to use force, if necessary. During the cold war I was, predictably, a believer in “containment”. But with the end of the cold war I parted company with allies who, to my annoyance—to my disgust even—embraced the marketplace view of life and, in extreme cases, discovered in government the great enemy of the people. In the struggle that seems to be shaping up between politics and economics, between government and corporation, I am generally on the side of government or, more accurately, I remain a stubborn believer in the great art of politics that, with all its corruptions, still represents the cooperative habits of human society, the rites of the public realm, the Quixotic attempt to transform mere power into legitimate authority. I do not believe in salvation through privatization and competition. I have not gotten over the Thirties. I have not gotten over the Sixties. And I have no desire to.
Defending a Myth
What strikes me, looking back, is my odd resistance to enticing forms of insanity. In the thirties I read Marx and the Marxists, of course, but my inoculation as a student of Selig Perlman kept me forever from embracing the faith in salvation by communist revolution. I was always an anti-communist, although of the calmer sort who was not an ex-communist. (I do not consider the feverish confessions of betrayed lovers as revelations of the truth. Whittaker Chambers and others—I loved her but she turned out to be a whore!)
In the sixties, resistance to the seductions of the New Left presented no problems at all. I considered my San Diego colleague, Marcuse, a pretentious and destructive fool. On campus “the movement” played out scenes from Dostoyevsky’s Possessed, activists seemingly unaware that beneath the despised conventional political crust throbbed the impatient hunger of a brutal “right” and that, if it came to force, the wrong people would win. I did not take the New Left seriously and at the height of the campus turmoil occupied myself with an experiment in liberal education that was both educationally radical and spiritually conservative.
But while I avoided infatuation with the dogmas of the revolutionary left, I confess to a lifelong prejudice that might be regarded as leftish, although I do not regard it so: I am an anti-capitalist. I do not mind the marketplace when it knows its place. I detest its elevation into the great paradigm of social life. The smugly perverse conception of the mind as a marketplace of ideas infuriates me. The notion that the global marketplace is, finally, the fulfilling of the human heart’s desire, transcending mere politics, is the monstrosity of the stockbroker’s mind writ large. I do not want the profit motive to dominate the world. I take no comfort in the promise of a business civilization. But I do not mean to develop this theme here. I mention it only to stress the fact that in rejecting the seduction of the left I found no comfort—as the New Right found—in the inviting embrace of Capitalism.
So what was left? If, for the moment, we consider politics as merely a procedural framework that other forces seek to dominate, what offers itself as an alternative to economic interest or the economic class struggle? There is religion, once thought safely dead or slumbering, now stirring itself, at home and abroad, into action in militant forms to the surprise of those who thought that the “enlightenment project” had settled all that. And there is race or ethnicity, or whatever we should call it, emerging with surprising power, unexpectedly, considered an archaic prejudice that really has no business on the modern or post-modern or post-post-modern scene. Religion and race—can you believe it?—still making impertinent claims on the brink of the new millennium, challenging triumphant materialism and economic determinism.
No comfort for me there although, I confess, I welcomed the assertion of forces that could challenge the almost taken-for-granted supremacy of the economic. No comfort, because I could not, unlike Ishmael, nail my little banner to those masts. If I were religious, I would go to a temple or a synagogue, but I do not consider “Jewish” a purely religious category and, although I hesitate to say so, I do not consider myself very religious or even religious at all. As for ethnic identification—and it is obviously in this sense, with all its fuzzy borderlines, that I consider myself and am a Jew—I do not regard ethnic consciousness or identification as an irrational barbaric habit to be discarded when we reach the heights of pure individualism, standing uncategorized as an untainted self. I have a rather apprehensive respect for ethnicity, for the warmth of ethnic identification and association, for its adamancy in the face of the old Stoic cosmopolitan ideal of unmitigated humanity. It presents deep problems, as affirmative action reveals. We want to transcend something that is undeniably there, something that is not without its value, but something whose persistence troubles the humane mind.
So, what is there to be when, in economic terms, you are committed neither to a command economy nor to the free market, and when you do not find salvation in religion or in race? In my case, a devout belief in our constitutional political order—an order that stands on its own feet, compatible with a broad range of economic policy, from forms of socialism to forms of capitalism. Hospitable to the varieties of religious experience without—in spite of some views—needing a religious basis. Compatible also with a broad range of ethnic policy, from complete integration and ethnic blindness to acceptance of a variety of ethnic clustering.
I am, in short, although I have devoted much of my professional life to political philosophy or theory, a simple, possibly naive, believer in the American Constitutional Order. I owe it the loyalty of a voluntary commitment. I enjoy rights under its aegis. I am bound by the obligations it imposes. It is the framework within which legitimate political energies are to be deployed. It is, above all, the great unifying force underlying the vast diversity embraced within the United States.
But to take this seriously imposes some burdens. Although what I have just described comes close to being the standard American creed, its status within the American academic world is that of a “myth”–a suggestive story not to be taken as literally true or, if so taken, a kind of political fundamentalism forgivable in peasants, not in the intellectual world. Political “enlightenment” displaces the familiar myth with a more realistic description of the interplay of self-interest and “power.” “Consent” becomes mere habitual acquiescence, “obligation” a shadowy “moral” notion. “Realism” displaces the simple-minded habits of normative idealism.
My self-imposed burden has been the thorough-going defense of our governing myth. Even as I say this, I am struck by how much of my intellectual and teaching life it sums up. I have been and am a defender of the faith, a defender of the naive against corrosive sophistication, resisting the seductive appeal of street-wise “realism.”
To begin with, I was, quite early, preoccupied with the problem of consent, with giving meaning to a moral commitment to membership in the concrete political community—not consent as merely an aspect of a hypothetical model. My first book, Obligation and the Body Politic, developed my version of an “agreement” theory of the state, drawing heavily on theorists like Hobbes and Rousseau, summing up years of reflection and teaching.
I have no doubt that I have agreed to abide by, to live under, the Constitution, that I am a voluntary member of the polity and, I believe, most citizens of the United States, if asked to consider their own status, will agree that they are consenting members. But there is much resistance to the idea. “When did I agree?” is a common challenge, and even the production of a signature on the dotted line does not still the protests. I have always found this an interesting and illuminating question, exposing the roots of political commitment, but I will not pursue it here. It was, as I’ve said, an early theoretical and educational preoccupation. And it may reflect, as I look back, an over-emphasis on the dependence of “obligation” upon “consent” or “agreement,” caught up in what Maine called the progressive movement from “status to contract.” In my fuzzier old age I find myself more hospitable to claims that obligations are generated by other things than consent, although consent still serves as a paradigmatic case of obligation-generation.
It is obvious to me now that to a believer in the political covenant it was natural to study the covenant itself, the Constitution. And, perhaps unaware of the obvious connection, I spent years in the study of constitutional law and judicial theory—a study that has all the fascination of casuistry, of hermeneutics, the art of interpretation, an art developed in any culture bound to square conduct with a sacred governing text, an art illuminated by deadly satirical masterpieces like Pascal’s Provincial Letters.
I cut my teeth in a collaboration with my guide into the world of law, the gifted blind professor of constitutional law, Jacobus tenBroek. We spent a year on an analysis of the Equal Protection clause of the 14th Amendment, in the course of which I became an addicted follower of the work of the Supreme Court, acquired my own set of Supreme Court reports and for decades read the court’s opinions. While I found a broad range of problems unexpectedly fascinating, I was especially interested in the civil liberties, free speech, or First Amendment area. Eventually I wrote a book about the relation of government to the mind, committing the unforgivable sin of mentioning both in the same sentence or paragraph or neighborhood. I find, on looking it over a quarter of a century later, that it says most of what I would want to say about freedom of speech, although I seem to have wasted a good deal of time lately in trying to find more to say on that subject.
But my final concern with the Court and the Constitution has been with the problem of judicial “activism.” The standard model of the Court (the Supreme Court) in our system is that of an essentially non-political referee. Its pronouncement that an act of the legislative or executive branch is illicit or “unconstitutional” is a declaration that some relevant rule has been violated, not a declaration that the Court, in its wisdom, is imposing its judgment of policy on the situation. It is not supposed to be, as are the other branches of government, “political.”
But merely to state this, except insincerely on ceremonial occasions, is to evoke the knowing smiles, if not the outright sneers or Homeric laughter, of the enlightened, the disabused, the realistic cynics who see through all that, who know the inside story; whose own youthful idealism was cured by clerkship or an equivalent spell in the sausage-factory of judicial life. Politics! Politics! Justices in their robes are simply politicians in drag!
As usual, life is easier for cynics or “realists.” They are out from under the burden of making sense of a governing myth. They may have some difficulty in explaining, in a satisfactory public way, what it is that a judge is supposed to do, how an Opinion differs from a Brief. But they manage well enough, aided by the habits of obfuscation, unintelligibility, and philosophical simple-mindedness.
Predictably, in retrospect, I took the rockier path. I defended the myth, the fundamental distinction between the judicial and the political, without the simple-mindedness of being a “strict constructionist.” The problem, here as elsewhere, is that of defending a basic myth (itself a notion of great complexity) without falling into the assumption that a fundamentalist reading is the correct reading, or at least holds a preferred position. This is a problem of great significance for the theory and practice of constitutionalism and governmental legitimacy. But I will not pursue it here. I have discussed it in my unpublished article—rejected by all the leading law reviews—on Judicial Activism and the Rule of Law.
It seems characteristic that my teaching activity should focus not only on the conception of education as initiation into some ongoing activity or vocation or culture, but that I should think of “liberal” education as initiation into the great political vocation, the ruling vocation. Although this subject is very close to my heart and one which I discuss endlessly at the slightest excuse, I have written enough about it (Experiment at Berkeley and The Beleaguered College) to enable me to curb the impulse here.
A Word on Plato
Most of what I have said above indicates the pervasiveness of the influence on my intellectual life of a dominating conception of the “political”—from social contract theory, constitutionalism and the law, to liberal education. But I have not yet mentioned another influence that, although deeply political, falls quite outside the range of this tradition—not a contract theorist, and not, challengingly, a democrat: Plato.
I will not even try to sum up what Plato has meant to me. From my undergraduate encounter with the Apology and Crito and The Republic to the present he has been a haunting presence. For years I taught The Republic in the introductory philosophy course. It was a departmental tradition, not my own discovery or creation, and I entered into it with growing appreciation. And today, more than a decade after retirement, I cannot forgive the department for irresponsibly abandoning that tradition.
I am not sure why, in spite of his importance for me, I have not written or published anything about Plato. Perhaps it is because of his importance to me, and perhaps because of my shame at knowing no Greek. But I will try to make a few points here of a peripheral sort.
First, using both a contract theorist and The Republic in an introductory course I became increasingly aware of the radical difference between them. Why did The Republic have so little, nothing really, to do with the ideas of rights, duties, obligations, legality, legitimacy, that loomed so large in the works of Hobbes, Locke, or Rousseau? The short answer that eventually grew upon me is that The Republic is not a book about political theory or philosophy, not an analysis of the conditions or limits of political legitimacy. It is an essay in “human obstetrics,” on bringing the human adult to term, on what happens in the second womb—the marsupial pouch—the polis within which we develop our characteristic selves, our cultured natures. It therefore invents psychology, sociology, educational theory. If it is a book about government it is about the government of children, about the great formative period of human life. If this is a bit enigmatic, I will leave it with the suggestion that the art of government has these two related but distinct aspects. Beyond that, one of Plato’s great gifts to the world is his presentation of the figure of Socrates, the Socratic Gospel according to Plato. One of the unique individuals of Western history—great, original, yet in no sense an “individualist”—he considered himself a son of Athens, a functionary, a midwife, a gadfly, an unpaid teacher working for, and deeply rooted in, the City; the enemy of sophistry in any of its recurrent incarnations; giving his life, bafflingly in the end, in the midst of discourse, as a sacrifice to the foolish majesty of the Law.
But the disturbing influence of Plato is, for me, the fact that he is not a believer in democracy. Governing is, for him, a necessary function and, like any function, he thought it should be performed by those who are best fit for it. Not by those whose bent was for power or glory or wealth or producing and consuming, but by those whose possession of knowledge of “the good” fit them for the practice of the art of governing—the art of guiding life by the vision of the “good,” the proper end of intentional activity. Governed, in short, by wisdom.
Plato traced the path of degeneration from the government of wisdom through the government of the heroic, the government of the rich, of the producer or artisan, and ultimately of the consumer in its restrained and unrestrained modes—from wisdom through the anarchy of desire. In a brilliant presentation of the range of types of polity in which the departures from the reign of wisdom are seen as forms of corruption (or sickness), Plato grants that, corruption for corruption, the democratic reign of desire, of the consumer, is the least harmful. This is a sort of backhanded compliment to democracy—if we assume the operation of a sort of law of inevitable corruption, then democracy is the best, the least harmful, of the corrupt forms.
Needless to say, this “justification” of democracy gave me little solace. One might grant that compared with the horrors of a theocracy gone into fanaticism, with a corpse-littered Napoleonic addiction to glory, with the heedlessness of a plutocracy to the horrors of the life of the unpossessing—compared with these, the mere excesses of a non-judgmental consumerism, the democratic equality of desires, seems the least of proffered horrors. But this hardly solved my problem.
The problem was how to make sense of the theory—the essential core of democracy—that each citizen in a democracy, in addition to his ordinary functional occupation, was to participate, was fit, or potentially fit, to participate in the great and crucial art and function of ruling. This was not, indeed, to simply raise a clamor for what he wanted, to squeak like a wheel needing grease, but to deliberate about the common or public good and act to promote it. To marshal wisdom to the direction of public affairs. Wisdom? Did the common man have that? Knowledge of “the good”?
The obvious move, in the light of recent fashion, is to do something with “value judgments.” The upshot of which would be that assertions about “goodness” are neither true nor false, or are equally true, or “relative” or, in one way or another, not subject to the judgment of “reason” so that the views of the common man about what is “good”—whatever his functional station—are as worthy of respect as those of any self-appointed intellectual elite. Thus, the case for democracy is made to rest on a skeptical relativism and an egalitarianism that counts each person as “one” and accepts the quantitative weight of the majority as determining the public good. The “good” is what I want; and more I’s outweigh fewer I’s; and anyone can play—so majority rule, that is, democracy.
Obvious as this line of argument is for anyone exposed to the élan of Logical Positivism and Value Theory in the course of his education, I am not happy with it and do not take it seriously, any more than the view, which I found widespread, that “skepticism” was the necessary philosophical basis for a humanely liberal democracy—in spite of the obvious example of Hume, who was both a skeptic and a political conservative. But I will not pursue this argument here. In the end, I declined to take this path.
What then was left? How reconcile my love of Plato with my devotion to democracy? Where, in not being a democrat, did Plato go wrong? And the answer was to be found in the vicinity of that kind of knowledge called “knowledge of the good”—the wisdom needed by those bearing the burden of governing human affairs.
Where did Plato go wrong? What a question! Was it Emerson who wrote on a student paper submitted to him by Oliver Wendell Holmes, Jr., in which the future Justice attacked Plato, “If you strike at a king, you had better kill him!” Where, I gird myself to say, did Plato go wrong?
His mistake is to think that “knowledge of the good” is the highest rung on the cognitive ladder, the ultimate specialty. Whether or not that identification is itself a mistake—and I think it is not—I will treat the knowledge required for ruling, “knowledge of the good,” as the same as “wisdom.” And my assertion is that wisdom is nothing like a cognitive specialty, to be acquired after an immersion in mathematics and the exercise of dialectic. Being wise is more like being balanced than like knowing mathematics. Wisdom is more like common sense than like cognitive brilliance.
In fact, “common sense” is the key to the mystery. Everyone, more or less, has some, more or less. It is a natural part of the equipment of a “rational animal”—almost a survival necessity. Very much like a sense of balance. It can be developed, weakened, strengthened, and when it is strengthened or developed it is recognized as wisdom. Unlike any particular cognitive art it is the common possession of common humanity and its development is independent of the development of our intellectual specialties.
In fact, our whole concern with a range of civil liberties—speech, assembly, press—can be seen as protecting the activities by which common sense can develop into wisdom. This is the real justification for free speech, deeper than the view that we are guaranteed a right to “express” ourselves, and it is the basis for the criticism of our habits and institutions of communication as they fall short of serving that function.
So the mistake I daringly and probably foolishly attribute to Plato is that of not seeing that the necessary ruling wisdom is an extension of common sense, a common sense heightened by the arts of discussion, and not a cognitive specialty like mathematics. This, at any rate, is how I accommodate my love of Plato with my commitment to democracy. I actually believe that wisdom is more like common sense than like “policy expertise,” and my faith in the possibility of democracy rests on that. Socrates, whatever may be the case with Plato, is not at odds with that.
“Academic Debris” is a collection of unpublished writings completed between 1993 and 2002.