Critics of liberalism, whether from the conservative right or the collectivist left, long have bemoaned its tendency to atomize society in the name of individual autonomy. The collectivist critique, of course, always has carried a certain irony. The utopian visions of ideologues like Karl Marx, who promised his New Socialist Man a life in which he might pursue his interests of the moment in peace and security, are nothing if not individualistic. Yet it long has been recognized that liberal societies value “the individual” above various forms of community. Moreover, as liberalism has become increasingly secularized, the individual has come to be seen as a being whose dignity requires maximum autonomy, so that he may freely choose his faith, his values, his vocation, and, in sum, his own way of life.
That said, it is wrong to see contemporary liberalism as fundamentally concerned with autonomy. Recent outbursts of intolerance toward those “on the wrong side of history” concerning the institutionalization of gay marriage are more than just aberrations; they show the increasing extent to which the politics of autonomy have been transformed into the politics of ethnic, racial, gender, and transgender recognition. American politics no longer are principally about “liberating” individuals from the customs, traditions, institutions, and communities that once bound them with common norms. Today, American politics principally concern debates over which groups the government should grant how much official recognition and favor and just how far the government should go in enforcing its declarations of increased status for those groups.
The process through which our culture and politics have gone to reach this point is one of classic secularization and atomization, powered by a government charged with “freeing” us from history, want, and danger. In classic Tocquevillean fashion, the mediating structures that once bound Americans to one another were undermined over time by a central government intent on taking over the roles of communities. Franklin Roosevelt’s declaration of four “freedoms,” including, of course, freedom from want and fear, signaled that the central government would now have the power and will to tend to what it saw as the needs of the people. As Tocqueville feared, the resulting intrusive and paternal state over time took over the functions of various social, religious, and local communities. Cultural atrophy followed, leaving individuals at the mercy of the central government for ever more of their lives, whether through regulation of their farms and businesses or the dispensation of various forms of governmental largesse.
But this atomization did not bring with it the kind of retreat into family life Tocqueville envisioned—at least not universally. Some Americans, principally white Protestants, did indeed simply retreat into their own TV rooms, leaving public life to tend to itself. But Americans from a variety of ethnic backgrounds continued to lead more communal lives, and to find in their very communalism a path to a certain kind of power. And, as those without a strong sense of community retreated from public life or cast about for some purpose to their privileged positions, promotion of ever-greater autonomy gave way to promotion of ethnic and other identities through government programs and rules.
America not being a land of mere individuals, the struggles of community life and competition began long before the New Deal. One might note, for example, Catholic immigrant fights to secure local funding for parochial schools during the early nineteenth century. Not wanting their children to be catechized into Protestant beliefs and practices, as was the norm in “public” schools at the time, Irish communities in particular sought and sometimes secured support for their parochial schools. The backlash, however, was significant, and furthered the secularization of public life. More generally, immigrant groups succeeded in securing political power in many urban areas. Though built on vibrant social communities, the resulting machine politics included significant corruption that troubled most Americans and, by the late nineteenth century, helped spawn the Progressive movement that would undermine meaningful local government of all kinds.
Black Americans during the late nineteenth century, of course, found themselves in a particularly vulnerable position; they saw the Fourteenth Amendment’s promised rights to due process, equal protection of the law, and the privileges and immunities of citizenship denied to them. This meant they not only could not participate in most political life but even were denied access to the courts in much of the United States. Denied legal protections at the same time they were subjected to legal and extra-legal disabilities, black Americans formed a plethora of organizations to help them tend to their own individual and communal needs, from burial societies to religious and healthcare organizations. Local, race-based communities flourished, but as unrecognized, legally penalized sub-groups.
It was in this context that the Progressive movement centralized political power beginning in the early twentieth century. Progressive programs were instituted, partly in the name of “clean government,” but mostly in the name of efficiency and expert rule. This did not mean the elimination of ethnically based communities, but it did further focus most Americans’ attention on governmental powers and what they could bring. Black Americans, of course, were excluded from political participation during this era and so were left largely to fend for themselves in their own segregated communities.
These conditions were made more extreme by the centralizing governmental policies of the New Deal. Just as important, by the 1950s the civil rights movement brought pressure to bear on all branches of government to end the worst abuses of Jim Crow. Largely instituted at the federal level, sometimes by the courts, other times by Congress or the executive agencies, new policies pulled in quite different directions. Some sought to increase individual opportunity by eliminating legal and political disabilities imposed on black Americans. These policies could be seen as in keeping with Martin Luther King, Jr.’s call for an America in which each individual would be judged according to the content of his character. Other policies sought to free individuals from various forms of discrimination in ways that “equalized” a variety of communities—requiring, for example, that universities, non-profit organizations, and most other communal groups with official status cease discriminating against racial minorities. And, finally, there were policies requiring “affirmative action” to compensate for past discrimination.
These last policies had their roots in the economic assumptions of the time. Best known to social scientists through W.W. Rostow’s theory of “take off,” these theories centered on the idea that poor groups (and nations) could not themselves begin the work of political and socio-economic advancement without government help. Various forms of infrastructure were needed so that these groups could mobilize and muster resources sufficient to sustain their own long-term growth. Thus, extra help through outreach and preferential treatment in hiring and contracting initially was presented as a set of short-term measures that would enable minorities to get their footing in a system that until recently had been organized in a fashion hostile to them and their interests. Combined with Lyndon Johnson’s “war on poverty” and other “Great Society” programs, affirmative action was sold as a means of leveling the playing field and fostering greater advancement and prosperity for all.
The programs did not work. Throughout the world, “assistance” fostered gamesmanship among those in positions of power, along with a culture of dependency and disappointment among the intended beneficiaries. But when it became obvious that the Great Society was a failure, the response was not to regroup, but rather to redefine and expand the role of government. In the case of affirmative action this meant development of new criteria that would redefine the goals of all government programs, and even of private sector policies, to bring them into a wider ideology dubbed “diversity.”
The diversity regime made its first inroads into public policy through the notion of “disparate impact.” The favorite weapon of former Attorney General Eric Holder in forcing various public and private organizations to allow federal oversight and reconstitution, disparate impact began as a tool of affirmative action intended for very limited and narrow use. Over time, however, as outreach programs failed to produce the desired outcomes, federal agencies and courts began looking at the “impact” of racially neutral policies to see if they had a tendency to “produce results” not in accord with elite assumptions regarding how many members of which races should be present and promoted in various professional and other groups. Thus, for example, employers were forbidden from requiring high school diplomas for employment, or using a variety of longstanding tests for promotion on the ground that they had a “disparate impact” on racial groupings. Employers, businesses, and agencies involved in various forms of contracting responded by adopting quotas intended to appease the regulators. When the quotas were too overt, however, they would be struck down by courts. What, then, were regulators and activists to do? A new argument was needed. It came in the form of “diversity.”
This narrative may appear to be an argument that affirmative action and diversity were programs and arguments specifically aimed at black Americans. This is not the case, of course. Women and Hispanics in particular were included in many of these programs and arguments. Moreover, the logic of diversity applies to as many subgroups as one can think of. For diversity is not about compensating for past injustices; it is about achieving a particular kind of society through promotion of a certain ideological approach to racial, ethnic, gender, transgender, and potentially any other grouping with political salience—excepting, of course, political viewpoints.
I leave detailed discussion of diversity for a future post. Here I merely point out that diversity allows for infinite expansion of the logic of affirmative action even as it transforms that policy’s logic into one clearly and consistently concerned with a specific endgame, rather than mere assistance in leveling the metaphoric playing field of professional life. The argument for diversity is that people of different backgrounds, including most importantly different racial backgrounds, have different experiences, causing them to have different perspectives on issues of the day, and on the problems faced by employees, customers, the public and, in the most diversity-conscious field of education, students. The varied needs of “clients” require, according to this argument, a broad range of perspectives which can be satisfied only by an employee/teacher base that includes a “critical mass” of varying backgrounds.
Diversity does not mean diversity of viewpoint per se. If it did, we would hear more about the fact that elite universities in general harbor fewer than a handful of supporters of Republican candidates—most of them libertarians, rather than conservatives. Diversity is a vision of society and government focused on groups deemed worthy of affirmation and support from powerful institutions; it is about achieving an ideological consensus that those groups possessing a certain amount of political importance should be recognized for this importance and should receive concomitant benefits. The logic and policies of this ideological program will be the subject of my next post.
Books on the topic of this essay may be found in The Imaginative Conservative Bookstore.