

After doubling down, Clay Christensen has tripled down. This is a familiar ploy: repeat a doubtful claim often enough and it will seem to come true. In the words of one critic, Christensen is to business what Malcolm Gladwell is to sociology.

Christensen and Michael B. Horn, the former the apostle of disruptive innovation and the latter his St. Timothy, recently repeated their claim that 50 percent of colleges will fail in the next decade, or 10-15 years. Their goal line has been reasonably consistent: in 2011 it was "as many as half" (we're 8 years in); in 2013 it was 25 percent in the next 10 to 15 years (6 years in); in 2017, "as many as half" within a decade (2 years in). In 2019, Christensen and Horn write that "some college and university presidents . . . tell us in public and private settings that they think the 50 percent failure prediction is conservative -- that is, the number of failures will be far higher." Names? Places? And does executives' saying so make it true (is the presumption of their predictive competence warranted)?

But the reasons for this prediction keep changing, and there's the ploy: save the effect but change the cause. One is reminded of the late Sydney Brenner's Occam's Broom: "sweep under the carpet what you must to leave your hypotheses consistent." The reason for such precipitous closures, foreseen (in 2011) for the period 2021-2026, was disruption due to innovative technologies: the subtitle of The Innovative University is Changing the DNA of Higher Education from the Inside Out. The main idea is that "in the DNA" of American colleges and universities is a desire to be like Harvard: wealthy and comprehensive. By contrast, Brigham Young University/Idaho exhibits a completely different, innovative strand of DNA: "in how it serves students by a combination of distance learning, on-site learning, and lower-cost alternatives to residential college" (quoting my review from November 2012).

Implicit in the metaphor of DNA is a certain determinism: you cannot change your own DNA, after all (without extremely powerful, still-developing technologies that occasion many moral questions). Your DNA determines, in this view, what you will be: such "Harvard" DNA will be a fatal flaw in many colleges and universities, according to Christensen and Eyring. Can you really change your DNA from the inside out? It's a clumsy metaphor: no wonder Christensen has abandoned it.

That 2011 book also quite ignored several inconvenient facts about BYU/Idaho. Mormons get a considerable price break there that "Gentiles" do not receive --pointing obviously to a hefty subvention from LDS sources. BYU/Idaho is, after all, a satellite of a powerful, wealthy, comprehensive mother ship in Provo, Utah: the satellite campus can hardly be a synecdoche, as though what is true for the part were true for the whole. The economics of BYU/Idaho and the considerable technological subsidy its online instruction receives from the mother system are simply left out. Apparently Occam's Broom works well in Idaho and Utah.

Is Christensen's central claim true, that the DNA of American colleges and universities propels them to desire to become Harvard? What about those institutions that could have become Harvard (or Michigan), and chose a different path? Did a liberal arts college that chose to remain a liberal arts college necessarily thereby fail its DNA? Ask too many questions, and the whole edifice collapses.

Christensen's account and predictions rely on a very superficial knowledge of the history of higher education. That lack of knowledge allows him to claim that nothing has really changed in American higher education in 150 years. How about women's education? (--one of Christensen's real blind spots). How about public community colleges? A comparison: religious groups in the Abrahamic traditions often work by one person with some kind of authority meeting and talking to others who are supposed to be instructed. Isn't the real point what they say? --words whose profound differences are masked by a reductive similarity of communication. Closer to home, James McCosh (of Princeton) and Clayton Christensen (of Harvard Business School) both stood in front of others and talked: is that really the sum of development between them? Again: Occam's Broom.

In their 2019 opinion piece, Christensen and Horn move the goal-posts: now the cause of institutional failure will be a changing business model driven by implicitly disruptive technologies. Does anyone remember the educational TV boom in the 1960s? That also was a changing business model. Of course business models are changing, and have changed in the past: again, consult a deeper understanding of American higher education. All current debates about business models, missions, curricula, and the needs of students have a very long history; see, for example, Crisis on the Campus by Michael Harrington (ca. 1951), or the famous General Education in a Free Society (1945). Students have never been mere passive recipients (although the contemporary view of them as "consumers" drives them to passivity). From the colonial to the denominational colleges, land-grant colleges, public universities: the business models have evolved. The business model in higher education is changing as we speak.

In other words, Christensen and Horn present the same tired and superficial nostrums of ten years ago. Even the historical examples in Christensen's ur-text, The Innovator's Dilemma (1997), are questionable. Predicting the apocalypse is an old business.

I have argued elsewhere that the fearsome language occasioned by "disruptive technologies" has origins in the pre-Millennial Restorationist theologies of early 19th century frontier America, especially the Burnt-Over district of upstate New York, and showcased pre-eminently by Joseph Smith, Jr. There's a reason that his Latter-Day Saints are latter. Christensen was formed in a community that stands by Smith's proclamations. I do not pretend that he has smuggled theology into business, but rather that there is an elective affinity between such mainstream LDS thinking and business disruption: Joseph Smith Jr. was supposed to put the rest of Christianity out of business (the "great apostasy" from the 1st century to 1830). How has that worked out for the Latter Day Saints? (--and never mind the very current cosmetic name changes).

I continue to wonder whether discussions of disruptive innovation in higher education are in fact a cloak for expediting other changes, less technological but no less disruptive, initiated by senior academic leadership. To be a disciple of "disruptive innovation" means you're a member of that club. Is this called groupthink? Has it served GE well?

So will somewhere around 50% of American colleges and universities fail in the next decade? This might be the case, but not for Christensen's and Horn's (and Eyring's) reasons. In their recent post in Inside Higher Ed, they cite situations in New England. I live in Connecticut: this is daily reality for me. Demographics are shifting: colleges and universities in the northeast quadrant of the continental US are going to have a hard time on that basis alone. Some have already closed, more probably will --but not because of changing business models driven by disruptive technologies. Demographic change exerts a constant pressure not unlike climate change. The real question is, who can adapt, and how well? The population probably will not achieve replacement rate, even in the southwest.

American higher education may be entering a "perfect storm" of demographic change, economic turmoil, and moral and cultural drift or outright corruption (e.g. the recent admissions scandals). All of this is cause for deep concern; none of it depends upon the snake-oil of "innovative disruption" via technologies that power changing business models. For many institutions, strategies of differentiation based upon price point, purpose, and location will matter a great deal. Strategies based on pure number crunching accompanied by credulous faith in technologies will probably not work. Online education is here to stay, but it will not disrupt on-ground education as much as non-technological demographic trends will. This is not the stuff of disruption, but of long-term anticipated and unanticipated consequences of historical change, about which Harvard Business School professors have no more particular expertise than anyone else. When disruptive innovation gets dumbed down, it isn't disruptive anymore, but just change.

Change is hard. Adaptation to changing conditions is hard enough without the burden of misplaced, scary, apocalyptic language.

Digital Minimalism, by Cal Newport. Random House, 2019. 256 pages.  ISBN 9780525542872.  (Other sources cited in this article are listed at the end.)

Minimalist –anything spare or stripped to its bare essentials.  Minimalism became a cultural movement, then a social commitment: live with less than 100 things, or Marie Kondo’s things that spark joy. Is digital minimalism a contradiction in terms? Is it relevant to higher education, or is it another so-called luxury like “liberal arts education?”  Read on.

  1. Digital Minimalism

The present is a digitally maximal time –but it’s all very vague.  Whatever is good in small amounts is assumed to be great, even best, in huge quantities: maximum use of digital media (and social media in particular) to give “people the power to build community and bring the world closer together” (Facebook’s mission statement). How? Just connect, share, and like, and somehow good things will happen.  “You never know, maybe you’ll find this useful” – one of the weakest sales propositions ever.

Anything or anyone that seems to resist digital maximalism risks the label “Luddite,” a dismissive reference to 19th-century weavers who destroyed machines to save, they thought, their way of life.  Newport is no Luddite: he’s a professor of computer science at Georgetown, author of erudite papers on distributed networks as well as popular works such as So Good They Can’t Ignore You (2012).  In Deep Work: Rules for Focused Success in a Distracted World (2016) he explored how to maintain focus to do optimal, cognitively demanding work in a world of distractions: high-value, high-impact undertakings rather than low-impact tasks.  Digital Minimalism sprang from his readers’ struggles with the role of new technologies in their personal lives.  The insight came to him while on a walk on a deserted beach in the Bahamas (Newport strongly recommends walking as an analog practice) – a lovely location for “deep work!”

Digital minimalism is “a philosophy of technology use in which you focus your online time on a small number of carefully selected and optimized activities that strongly support things you value” –without the infamous Fear of Missing Out (FOMO).  Living this philosophy successfully means engaging long-term in cost/benefit analyses: is the benefit worth the time?  Time is the most truly limited resource. Clutter is costly; optimizing your time is crucial; intentionality, consistently following through on your commitments, is satisfying.

Newport unpacks all this lucidly (he is, after all, a computer scientist).  His first chapters lay the foundations: why declutter your digital life? What can you gain? How do you do it and stick to it? His latter chapters focus on practices: how to do a digital de-clutter; how to grow comfortable again with spending time alone; how to reclaim real leisure; how to avoid digital rabbit holes such as clicking “like,” and how to find other digital minimalists: community support. He seeks to answer Andrew Sullivan’s plaintive essay, “I Used to Be a Human Being” (2016): to help upend Sullivan’s lament “by providing a constructive way to engage and leverage the latest innovations to your advantage” –to be able to “say with confidence: ‘Because of technology, I’m a better human being than I ever was before.’”

Wait—isn’t this the point of an education? Newport acknowledges the depths here: Aristotle, Thoreau, Abraham Lincoln; but he avoids getting pulled off-task.  The book is a readable length, but its shadow stretches very far indeed: becoming a better human being extends far beyond dispelling the enchantments of technology.

Back, for a moment, to Sullivan:  his moment of insight came after illness, sleeplessness, the demands of a profitable media business (blog), and dwindling friendships. “Multi-tasking was a mirage. This was a zero-sum game. I either lived as a voice online or I lived as a human being in the world that humans had lived in since the beginning of time.”  Why zero-sum? He had (has) only so much time to pay attention. The ceaseless wind-tunnel of distraction “denies us the deep satisfaction that comes with accomplishing daily tasks well, a denial perhaps felt most acutely by those for whom such tasks are also a livelihood —and an identity.”

Many university teachers have noticed that students (especially undergraduates) now seem even less prepared to engage in serious thinking, research, writing, and lab work than a decade ago.  Their observations dovetail with major shifts in student mental health observed by counselors in the past few years, validated by Jean Twenge’s research on those born 1995-2012, who grew up with constant access to social media. “Rates of teen depression and suicide have skyrocketed since 2011. . . Much of this deterioration can be traced to their phones . . . . The effect of screen activities is unmistakable: The more time teens spend looking at screens, the more likely they are to report symptoms of depression . . . . This trend has been especially steep among girls.” Twenge’s teenage research subjects of 2015-2016 are enrolling (or will enroll) in university classes in 2018-2021.

Can this be blithely dismissed: That’s progress, you can’t stop it?  “Progress” hides a more sinister reality: the social media apps these young people use so often have been specifically engineered to encourage maximal use through intermittent positive reinforcement and the drive for social approval.

  • Apple engineers Justin Santamaria and Chris Marcellino developed the iPhone push-notification technology that affects the same neurological pathways as gambling and drug use: “reward-based behavior that activates the brain’s dopamine pathways.”
  • Tristan Harris (“Design Ethicist” at Google) notes that humans crave approval, and companies tweak their apps to hook their users with the power of unpredictable positive feedback, sprinkling “intermittent variable rewards [likes, tags, tweets, etc.] all over their products because it’s good for business.” Getting a reward is like winning at a slot machine, and “several billion people have a slot machine in their pocket.”
  • Sean Parker (Facebook’s founding president) remembers, “The thought process that went into building these applications, Facebook being the first of them, ... was all about: How do we consume as much of your time and conscious attention as possible?”

The combination of phones and social media apps is specifically designed to hook users –especially young people—into prolonged use because their business model is to expose them to paid advertising, political, and entertainment content intended to shape their behavior and gather their votes and dollars.  A great many users (especially the young) are compulsively on their phones because they have been hooked –exactly what the phones were designed to do.  Sean Parker fears that social media “literally changes your relationship with society, with each other ... It probably interferes with productivity in weird ways. God only knows what it's doing to our children's brains.”  Sullivan suggests that this enslavement is merely “new antidepressants of a non-pharmaceutical variety.”

These intentions are not a new idea, but in digital engineering they have been taken to new extremes.  Timothy Wu writes that newspapers were drastically changed by the introduction of advertising in the 19th century: readers became not just subscribers, but also an audience the newspapers delivered to advertisers.  Matthew Crawford notes that the first industrial assembly lines, by demanding concentration on repetitious tasks, so altered the experience of work that Henry Ford’s workers simply walked out in 1913.  When Ford wanted to add 100 workers to the line, he had to hire 963, and was forced to double the daily wage to keep the line staffed.  In broader social terms, Crawford writes elsewhere that advertising through social media apps claims a large portion of the “attentional commons” for private purposes in the attention economy, with a resulting surfeit of messages and enervated users.  Just as Ford’s innovations in the long term fomented a powerful labor union, could a “user union” come to counterbalance corporate attention engineering?

Resisting such claims on labor or attention is not new.  The 19th-century Arts and Crafts movement inspired by John Ruskin and William Morris grew from their revulsion against mechanized production and the Dickensian, oppressive division of sweatshop labor in Victorian England.  Newport invokes Thoreau’s famous axiom in Walden, “The cost of a thing is the amount of what I will call life which is required to be exchanged for it, immediately or in the long run.”  Rather than the standard account of cost in money, Thoreau counts the cost in life: attention, connection, his pleasure in living deliberately.  In the first chapter, “Economy,” Thoreau gives a very straightforward, New England accounting of his life on the pond, replete with tables (his kind of spread-sheet) to show his point that frequently more is actually less.  By contrast, do not our students, like Thoreau’s Concord neighbors, crushed and smothered under their load of distraction and debt, come to lead lives of quiet desperation?

Is there any solution or alternative?

A hint of a solution has been given, ironically, by Facebook itself.  Its own David Ginsberg and Moira Burke ask, “is spending time on social media bad for us?” After reviewing a lot of research, they conclude, “it really comes down to how you use the technology.”  This gives the game away: reflective, intentional use (in Newport’s words) “punctures the myth of Facebook as a foundational technology that everyone should just “use” in some generic sense . . . . [they] are encouraging people to think critically about exactly what they want to get out of this service.”

Newport realizes the potential of Ginsberg’s and Burke’s admission. “This mind-set is potentially disastrous” for Facebook because it could result in far less time spent on it, dramatically decreasing its value for advertisers and investors.  Any explicit comparison of the real costs of time and attention with the real benefits of social media threatens Facebook’s business model.

Reflective, intentional, critical use is a critical problem for the attention economy.  By developing minimal and deliberate use of digital technology, users might “front only the essential facts of life,” to see if they can learn what it has to teach: to choose a focused life.  Have universities, by so catering to students’ and parents’ anxieties, accepted their students’ distraction by social media unreflectively?  The “attentional commons” of higher education has always faced competition, but now faces determined competitors armed with the specific agenda to “consume as much of [their students’] time and attention as possible” (Sean Parker).

Universities can reclaim their cultural relevance when they come to understand that the greatest threat to education today is not careerism, financial instability, or political hostility, but distraction.  If higher education will ever offer a coherent alternative to a depressed and frazzled generation, it will have to engage the powerful corporate and cultural forces that want to hold students, faculty, and staff hostage to engineered, hyper-palatable mental pseudo-stimuli.  This engagement will have to be smart, flexible, subtle, and persistent if we are to challenge the fast food of social media with the slow cookery of a strenuous education.

The past few months have shown that Facebook and other social media sites are hardly invincible and certainly not foundational, as they face sharp-eyed scrutiny from the public, government, and investors alike.  Now is the time for higher education to step up to the challenge of distraction.

Where is this wisdom to be found, and where is the source of this understanding?

  2. Critical Attention, Disruptive Humility, and Analog Reality

Much fuller responses come from a varied handful of writers.  Gary Rogowski and Matthew Crawford return to the attention required by the physical world of objects, activities, and pleasures that live on despite the blandishments of the digital.  Rogowski’s title and subtitle reveal his point: Handmade: Creative Focus in an Age of Distraction.  Examining what his life has to teach, the essential facts of his life (like Thoreau), Rogowski finds a correspondence between our hands and our thoughts: “Long ago we learned to think by using our hands, not the other way around.”  The sheer physicality of “analog” asks us “to give good evidence of yourself. Do good work.”

Craftsmanship “must reckon with the infallible judgment of reality, where one’s failures or shortcomings cannot be interpreted away.”  Crawford believes “the mechanical arts have a special significance for our time because they cultivate not creativity, but the less glamorous virtue of attentiveness.  Things need fixing and tending no less than creating.”  Sheer physicality, the recalcitrance of the real, is a way of learning the humility of attention. As Rogowski puts it, “situated among our fellows in norms and practices that shape a life, the environment matters.”

The humility of attention is not new. When Alexander Langlands inquired into the origins and true meanings of traditional crafts, he bumped into the embeddedness of craeft, an Old English word connoting an amalgam of knowledge, power, and skill, extended to a sense of wisdom and resourcefulness. “We can’t put our finger on exactly what craeft was.”  As an archaeologist, he constantly studies material culture, the “deep time signatures” of so many traditional crafts.  They remind him, “we are makers, and that we have always lived in a world of making.”  The cognitive contemplation of making exercises the mind in silence and solitude.  Craeft is a form of intelligence, an ingenuity through which we can think, contemplate, and be: powerful, resourceful, and knowledgeable through the medium of making in a world of diminishing resources and increasing environmental instability.  To practice craeft is all about a mindful life achieved through beautiful simplicity: the humility of attention, failure, and persistence.

Craeft provides a high-touch physical higher education.  It pays attention to the reality of physical, tangible materials as they both shape and respond to human desires, needs, and intentions.  Langlands’ “deep time signatures” are written with the ink of character on the paper of experience, an in-this-moment contact with forms of past life and future practice that transcend any individual’s encounters.  In the 21st century, is this mere nostalgia, or wishful thinking?

Jaron Lanier, one of the primary inventors of “virtual reality,” practices these deep signatures of time through his music and writing.  Understanding the virtual imitation of reality so deeply, Lanier wrestles with actual reality acutely: difficult-to-play ancient instruments such as the begena (ancient harp), suling (flute), or esraj (sitar)—he has collected over a thousand.  He doubles down on being human by re-learning the ancient crafts of playing, the deep signatures of time that learning such ancient skills requires.

Lanier’s craeft of music extends to his craeft of writing, and his understanding of books.  His Who Owns The Future? (2013) concludes with entwined meditations, “The Fate of Books” and “What Is To Be Remembered?”  Having watched the disruption of the music business and the unintentional (but devastating) impoverishment of musicians, Lanier sees a similar pattern developing for readers and writers.  Six years later, his hopes, fears, and expectations for books are fascinating: some have already come true.  Among them:

  • Readers will be second-class citizens (because ownership of a printed copy has become a contract of access to digital content; the reader is left no capital, nothing to resell –a rejection of a true market economy);
  • Many books will be available only via a particular device, such as a particular company’s tablet (think: Kindle, Nook, or digital rights protection such as Adobe Digital Editions);
  • Readers will spend a lot of time hassling with forgotten passwords, expired credit cards, or being locked into the wrong device . . . for years at a time;
  • People will pay less to read . . . while people will earn still less from writing (repeating a pattern seen in music and other media in which “software swallows everything”);
  • By the time books have mostly gone digital, the owners of the top Internet servers . . . will be more powerful and richer than they were before.

What does Lanier see about a book that is worth saving?  A book is not an artifact, but a synthesis of fully realized individual personhood with human continuity (my italics).  To Lanier, the pattern of devaluation and exploitation of “content” (music, literature) feels off-kilter and short-sighted. The network that fails to recognize and preserve human continuity will serve only its masters, just as digital music has fostered a dehumanization of music.  When the human role is reduced to producing “output” (or “content”) just like a collective or algorithm, and when a winner-takes-all market-place ideology becomes the sole means of valuation, the very qualities that make music or literature, reading or writing, humanely worthwhile are simply sidelined, as irrelevant externalities.  To paraphrase the famous declaration from Vietnam, “we had to burn this art form to price it.”

Lanier proposes this (potentially offensive) thought experiment: “Would you want to send a collectively programmed robot to have sex on your behalf because it was better than you, or would you want to have the sex yourself and get better by doing?” Too often Lanier has heard in response, “I’d prefer to have the best available robot,” silently passing over the nexus of data, servers, technological finance, and market ideology that make that choice seem preferable. By analogy, “If market pricing is the only legitimate test of quality, why are we still bothering with proving theorems? Why don’t we just let the market determine whether a theorem is true?”  Content-by-algorithm or -collective is slow suicide for creators and researchers and slow homicide for everyone else.  Lanier concludes, “I am arguing that there is more than one way to build an information economy, and we’ve chosen the self-destructive option.”

“Content” at market price seems to be so easily available.  For example, printed books are still common.  Hyper-focused on “content-creation” and marketplace valuation, the “siren servers” of technology obscure how each book bears the marks of the deep signatures of time.  A modern book is a carefully crafted, progressively evolving artifact that only implies the depth of human continuity and experience in its design.  If you doubt this, just look at books printed a century or two ago to see the differences in design and the evolution of writers’ and readers’ expectations.  The book is a synthesis of fully realized personhood with the human continuity signed with time’s deep signatures. A printed book simply transposed into a digital reader as some kind of file brings convenience at the cost of interaction (annotating, bookmarking, easy page-to-page comparison or reference –and the present alternatives to those activities in software are simply dreadful).  A time may come when a digital book represents human continuity –but not yet nor for a long while.  We are creatures of touch and smell as well as sight, hearing, and taste.

I believe Lanier’s insight (that books and other media are syntheses “of fully realized individual personhood with human continuity”) foregrounds the odd and enlightening “revenge of the analog” (David Sax’s book). The return of “disrupted” technologies (prominently but not only vinyl recordings, paper books, film photography, and board games) is possible not only because digital versions work so well, but because they work too well.  Convenience becomes a trap, a mirage of experience rather than genuine experience. Sax’s different narrative shows that technological innovation is not an inevitable and irreversible march, but “a series of trials that helps us understand who we are and how we operate.”  The overwhelming superiority of digital media ironically leads to a shift of valuing: an older technology can sometimes work better, and its inconvenience and inefficiency become its renewed strength.  Print publishing, retail sales, on-site workers in a vibrant community (such as Shinola in Detroit), on-ground education, face-to-face interaction in meetups, book clubs, meditation groups, summer camps—all of these ideas and practices have proven surprisingly resilient.  Resilience cannot charm away real difficulties: no analog Dr. Pangloss!  But it does suggest that a popular narrative of irresistible disruption might not be all there is; resilience is an alternative to battle-field metaphors of victory and defeat.

The worship of disruptive technologies (and disruption in general) can go too far. “Disruptive innovation” may not in fact be the rule, but the exception.  Clayton Christensen’s celebrated theory is not beyond criticism (by his Harvard colleague Jill Lepore), nor without equally cogent alternatives (Harvard colleague Michael Porter’s theories of competitive strategy and advantage).  Very recently Alfred University president Mark Zupan called Christensen’s bluff.  Christensen has doubled down on his 2011 prediction that as many as half of American universities would close or go bankrupt in 10 to 15 years.  Zupan has wagered $1 million, to be given to Christensen’s non-profit institute if at least half of all traditional universities fail or merge by 2030; if not, Christensen would contribute the same sum to Alfred University’s endowment.  Will Christensen be willing to put his money where his mouth is?  (See also this previous blog entry.)

The humility of paying careful attention to deeply fallible and human face-to-face encounters and processes calls the bluff of technologism –the belief that (often disruptive) technology will cure all woes, a new variety of traditional snake-oil wonders (climate change: “there’s an app for that!”).  Ryan Raffaelli (also from Harvard Business School) has studied how new value can be created for “old innovations” –a subtle and delightful riposte.  Raffaelli has studied, in particular, how the Swiss watch industry saved itself by reinventing its identity: a technology re-emergence.  Swiss watchmakers who survived the competition of cheaper imports saw prices for mechanical watches at auction increase dramatically.  Suddenly they realized: there could still be value latent in an older technology re-conceived as a social and cultural fashion signal.  Just when the Apple watch might have swept away the Swiss watch industry, the watch industry instead revived, offering a progression for those who, having worn an Apple watch, now wanted something up-market that feels and signals “genuine.”  As went Swiss watches, so have fountain pens (Goulet Pens), notebooks (Moleskine), and independent bookstores, now numbering more than in 2012.

Here—at last!—is an opening for liberal arts education.

  3. Analog Liberal Arts Education Against the Grain: Digital Minimalism in Action

The term “liberal arts” now seems so old-fashioned or vague that it can impede further conversation.  I once spoke with the president of a well-regarded liberal arts college who claimed that she no longer tried to use the term, because no one knows what it means.  Her view discounts the real value of the phrase “liberal arts education”: its very ambiguity and elusiveness have allowed it to resist facile, ideological characterization. The ambiguity that impedes conversation is also what allows its value to be re-conceived.  The word “liberal” in this case means something far older than the tired liberal-versus-conservative polarization that disfigures (or has destroyed) public discourse: liberal as in liberating, free-making, getting disentangled from the sloppy shortcuts of everyday thinking.  “Liberal arts” connotes a narrative of thoughtful practices out of synch with careerism and digital maximalism.

The narrative of the liberal arts, a braid of habits and practices from (inter alia) classical antiquity, medieval universities, American colleges in the New National period, and the rise of the sciences and social sciences in the Progressive era, is an amalgam that is unavoidably counter-cultural. Powerful ideologies, whether arising from crown, state, church, market, or faction, have always arrayed themselves against it.  Overt ideologies are the obvious culprits, but equally antagonistic have been the habits and dispositions that can cloak the subtle, slippery demands of power, position, and fortune.  The “liberal arts” have been both the servants of power and privilege and a primary source of their interrogation. This both/and contradiction or ambiguity, with a strong whiff of the impractical, means that contemporary marketplace ideology can hardly avoid patronizing liberal arts higher education as outmoded, expensive, boring, elitist, wasteful, and ideologically suspect.  That suspicion shows exactly why liberal arts higher education meets a critical need for pause and reflection in an ideologically riven time.  Being a thoughtful, humane person has never been easy.

What are the liberal arts?  Many answers have been proposed that, following William Cronon’s insight, come down to lists: essential facts, mandatory readings, curricular requirements, etc.  The original artes liberales were a medieval list of seven subjects that set male aristocrats off from everyone else.  Corresponding modern lists of competencies are scarcely inspiring and carry the odor of the curricular committee: competencies in communication, major modes of thought (science, mathematics, arts, social sciences, etc.), cultural heritages, etc.  Nevertheless, Cronon cannot resist making another list, ten personal qualities which he associates with people who seem to embody the values of a liberal education (here only several):

  1. They listen and they hear;
  2. They read and they understand;
  3. They can talk with anyone;
  4. They can write clearly, persuasively, and movingly . . .
  7. They practice humility, tolerance, and self-criticism . . .
  10. They follow E.M. Forster’s injunction from Howards End: “Only connect . . .”

(The ones left out here are necessary as well: read Cronon’s short essay.)

All of these personal qualities are well associated with the analog, “real” world –contra the critics who see “reality” only as an excuse for performative hard-headedness.  Cronon:

A liberal education is not something any of us achieve; it is not a state.  Rather it is a way of living in the face of our own ignorance, a way of groping toward wisdom in full recognition of our own folly, a way of educating ourselves without any illusion that our educations will ever be complete.

Cronon then emphasizes that each of these qualities is also education for human community:

Each of the qualities I have described is a craft or a skill or a way of being in the world that frees us to act with greater knowledge or power. . . . [W]e need to confront one further paradox about liberal education. In the act of making us free, it also binds us to the communities that gave us our freedom in the first place; it makes us responsible to those communities in ways that limit our freedom. In the end, it turns out that liberty is not about thinking or saying or doing whatever we want. It is about exercising our freedom in such a way as to make a difference in the world and make a difference for more than just ourselves.

Cronon’s “craft, skill, or way of being in the world” returns to craeft: a way of living that bears the deep signatures of time, that nurtures the craeft of attention, humility, and human continuity.  This is the real point of a liberal arts college education –that it cannot be achieved or stated in four years, but is a life-long way of living in the face of our own knowledge and ignorance, wisdom and folly, and inevitably incomplete connection.  Only connect: human freedom in the service of human community, human continuity.

Matthew Crawford proposes a fruitful metaphor for connection: the cultural jig.  The metaphor’s origins lie in carpentry: “a jig is a device or procedure that guides a repeated action by constraining the environment in such a way as to make the action go smoothly, the same each time, without his having to think about it.” For example, a jig can guide a saw, so that a carpenter need not measure many boards individually.  A cultural jig can contribute to personal character “that is built through habit, becoming a reliable pattern of responses to a variety of situations”: such as (historically) thrift, parental authority, personal accountability.  Tradition can be a robust cultural jig, fostering a community of practice in which real independence through interdependence does seem to become possible (though never guaranteed).  The conundrum is that real freedom and self-mastery require some dependence on and mastery of cultural jigs, such as attentiveness.  When a liberal arts education is effective, a student (or graduate) is never averse to seeking genuine cultural jigs to guard against folly, foibles, and simple ignorance.

This line of thinking can get very woolly very fast (and maybe already has).  But it can have real-world consequences, such as careful thinking about what makes work meaningful.  British business researchers Catherine Bailey and Adrian Madden have investigated what does, in fact, make work meaningful, and their conclusions bear a striking similarity to the consequences of a liberal arts education.

How and why do people find their work meaningful? When:

  • when work matters to more people than just the workers themselves, or their bosses;
  • when it can mean engagement with mixed, uncomfortable, or even painful thoughts and feelings, not just euphoria or happiness, and a sense of coping with sometimes intractable challenges;
  • when meaningfulness in work is allowed to emerge in an episodic rather than scripted, sustained way, in moments that are not forced or managed, but “contain high levels of emotion and personal relevance, and thus become redolent of the symbolic meaningfulness of work;”
  • when workers have the time and space to become reflective, as meaningfulness can be realized over time rather than spontaneously or momentarily;
  • when the feeling of engagement is exactly a feeling of personal contribution in an organizational environment of tasks and responsibilities, formed in a narrative of engagement and satisfaction.

By contrast work becomes meaningless when connections are broken: people from their values; leaders from subordinates; pointless tasks without connection to any real problem; over-riding people’s better judgment; disconnection from personal relationships; exposure to personal harm (disconnection from safety and well-managed risk).  Organizations can cultivate meaningful work by stressing personal connections, well-delegated responsibilities, real-world problems, real-world beneficiaries, and recognized accomplishments.

Such meaningful work can become the occasion of the personal attributes that William Cronon identified in liberally educated individuals (above).  These can well be extended, in the context of work, to build the practice of forgiveness, as Rogowski illumines it: “Screwing up is a given. Forgiveness is not. Unless you practice it.”  A liberal arts education is an ideal way to learn how to fail, since it is a given that one will fail, but also how to recover and go on.  An education that teaches how to fail, however, is neither what a college admissions officer wants to represent, nor what the anxious parents of a high-school senior want to hear.

Only connect: human, working freedom in the service of a good greater than mere organizational continuity, a craeft that conserves and strengthens human continuity.  The imperative to connect is, in a vital sense, a powerful cultural jig.

But can you make a living doing this?  The predominant narrative in American society now is that a liberal arts education is a frivolous waste or an élite privilege that prepares a young person poorly for the “real world” (always taken to mean: the grimly hyper-competitive world that the writer envisions for everyone else).  Is a liberal arts education of little value in “today’s world”?

A recent Mellon Foundation study answers no: a liberal arts education incurs costs and confers benefits on par with pre-professional degrees.  This is a carefully nuanced study that looks closely at what constitutes a liberal arts education, noting that mere attendance at a liberal arts college is not always a good proxy.  It tries to control for factors such as parental income and achievement, college selectivity, typical incomes in various professions, and other factors.  It warns that observed correlations do not prove causation, since in any case no real control group is available to perform a meaningful (or even ethical) experiment.  Nevertheless, “claims that liberal education is of little value because it does not lead to employment is clearly not supported by the existing data.”  At the same time: more work needs to be done.

All this seems a long way from digital minimalism –or is it? I am convinced that one of the best ways to counter simplistic digital maximalism –the vague promise of giving “people the power to build community and bring the world closer together” (Facebook)– is exactly the craeft of connection that bears the deep signatures of time in human continuities in a variety of media.

A liberal arts education is analog in a digital world: intentionally inconvenient as a strategy for identifying enduring value.  The power of a “digital cleanse” (Newport) is vastly increased when a liberal arts education can provide genuine alternatives and powerful cultural jigs: engagement with real people, over-arching, important questions, and intractable problems.  Screwing up is a given. Forgiveness is not. Practicing craeft is a jig of attention, humility, forgiveness, failing and moving beyond failure, human continuity in a real community.  Only connect.

Resources linked or mentioned in this blog entry:

Allen, Mike. “Sean Parker unloads on Facebook, ‘God only knows what it’s doing to our children’s brains,’” Axios, November 9, 2017

Bailey, Catherine, and Adrian Madden. “What Makes Work Meaningful—Or Meaningless.” MIT Sloan Management Review, Summer 2016.

Christensen, Clayton M., Michael E. Raynor, and Rory McDonald.  “What Is Disruptive Innovation?” Harvard Business Review, December 2015.

Crawford, Matthew B.  Shop Class as Soulcraft: An Inquiry Into the Value of Work, Penguin Books, 2009,  especially pages 41-42.

Crawford, Matthew B.  The World Beyond Your Head: On Becoming an Individual in an Age of Distraction.  Farrar, Straus and Giroux, 2015, especially pages 8-20.

Cronon, William.  “’Only Connect’ . . . The Goals of a Liberal Education,” The American Scholar 67, Autumn 1998 (full-text available on his website)

Ginsberg, David, and Moira Burke. “Hard Questions: Is Spending Time on Social Media Bad for Us?” Facebook Newsroom, December 15, 2017.

Harris, Tristan. “How Technology is Hijacking Your Mind—from a Magician and Google Design Ethicist,” Medium: Thrive Global, May 18, 2016

Hill, Catherine B., and Elizabeth Davidson Pisacreta.  The Economic Benefits and Costs of a Liberal Arts Education.  An essay commissioned by The Andrew W. Mellon Foundation under the auspices of the Mellon Research Forum on the Value of Liberal Arts Education.  The Foundation, 2019.

Langlands, Alexander.  Craeft: an Inquiry into the Origins and True Meaning of Traditional Crafts. New York: Norton, 2018.

Lanier, Jaron.  Who Owns the Future? New York: Simon and Schuster, 2013.

Lederman, Doug. “Clay Christensen, Doubling Down,” Inside Higher Ed, April 28, 2017.

Lepore, Jill. “The Disruption Machine: What The Gospel of Innovation Gets Wrong,” New Yorker, June 23, 2014.

Lewis, Paul. “’Our Minds Can Be Hijacked’: the Tech Insiders Who Fear a Smartphone Dystopia,” The Guardian October 6, 2017

Porter, Michael. “The Five Competitive Forces That Shape Strategy.” Harvard Business Review, January 2008.

Raffaelli, Ryan, and Carmen Nobel.  “How Independent Bookstores Have Thrived in Spite of Amazon.com,” Harvard Business School, Working Knowledge, November 20, 2017.

Raffaelli, Ryan.  Technology Reemergence: Creating New Markets for Old Technologies, Swiss Mechanical Watchmaking 1970-2008.  Administrative Science Quarterly, May 2018.

Rogowski, Gary. Handmade: Creative Focus in the Age of Distraction.  Fresno, California: Linden Publishing, 2017.

Sax, David.  The Revenge of Analog: Real Things and Why They Matter.  New York: Public Affairs, 2016.

Sullivan, Andrew.  "I Used to Be a Human Being."  New York, September 2016

Thoreau, Henry David. Walden: A Fully Annotated Edition, edited by Jeffrey S. Cramer, New Haven: Yale University Press, 2004, especially the first chapter, “Economy”

Twenge, Jean M. “Have Smartphones Destroyed a Generation?” The Atlantic, September 2017.

Wu, Tim. The Attention Merchants: The Epic Scramble to Get Inside Our Heads.  New York: Vintage Books, 2017, especially pages 11-17.

Zupan, Mark.  “Betting on (Non-Profit) Higher Education,” Rochester Democrat and Chronicle, February 28, 2019.


The Fuzzy and the Techie: Why the Liberal Arts Will Rule the Digital World, by Scott Hartley.  New York: Houghton Mifflin Harcourt, 2017. ISBN 978-0544-944770. $28.00 List.

Hartley writes that a "false dichotomy" divides computer science and the humanities, and he extends this argument to STEM curricula as well. For example, Vinod Khosla of Sun Microsystems has claimed that "little of the material taught in liberal arts programs today is relevant to the future." Hartley believes that such a mind-set is wrong, for several reasons. Such a belief encourages students to pursue learning only in vocational terms: preparing for a job. STEM fields require intense specialization, but some barriers to coding (for example) are dropping with web services or communities such as GitHub and Stack Overflow. Beyond narrow vocational boundaries, Hartley argues that liberal arts educations widen a student's horizon, inquire about human behavior, and find opportunities for products and services that will meet human needs. The "softer" subjects help persons to determine which problem they're trying to solve in the first place.

That said, the book does not move much further. Hartley never really tries to provide a working definition of a true "liberal arts" education except to distinguish it from STEM or computer science. By using the vocabulary of "fuzzy" and "techie" he encountered at Stanford, he inadvertently extends a mentality that has fostered start-ups notably acknowledged to be unfriendly to women. So far as I could determine, only a handful of Hartley's cited sources were published anywhere other than digitally --although the "liberal arts," however defined, have a very long tradition of inquiry and literature, which Hartley passes by almost breezily and which is very little in evidence here. His book is essentially a series of stories of companies and their founders, many of whom did not earn "techie" degrees.

Mark Zuckerberg's famous motto "move fast and break things" utterly discounted the social and cultural values of what might get broken. Partly in consequence of that ignorance of human fallibility and conflict, the previously admired prodigies of Silicon Valley start-ups are facing intense social scrutiny in 2017.

Hartley is on to a real problem, but he needs to do much more homework to see how firmly the false dichotomy between the sciences and humanities is rooted in American (and world-wide) culture. The tendency, for example, to regard undergraduate majors as job preparation rather than as disciplined thinking, focused interest, and curiosity is so widespread that even Barack Obama displayed it. ("Folks can make a lot more, potentially, with skilled manufacturing or the trades than they might with an art history degree" --Barack Obama's remark in Wisconsin in 2014; he did retract it later.)

Genuine discussion of the values of humanities and STEM degrees can only take place with the disciplined thinking, awareness of traditions, and respect for diversity that are hallmarks of a true liberal arts education.


Meaning in Life, and Why It Matters, by Susan Wolf.  With Commentary by John Koethe, Robert M. Adams, Nomy Arpaly, and Jonathan Haidt.  Princeton University Press, 2010.  143 pages.  ISBN 978-0-691-14524-2.  Sacred Heart University Library BD431.W77 2010

A colleague who noticed Meaning in Life on my desk asked, “Really? Is there any?”  Influenced by this book, I responded, “Yes there can be, but you have to think carefully.”  Susan Wolf carefully formulates an enlightening and at least partially persuasive case that yes, there can be meaning in life, and if you feel your life has none, you should reflect on that.  This long review takes a careful look at her thinking, because her questions cut to the core of a liberal arts education (see the closing remarks, below).
 
The book is a kind of symposium.  Wolf’s two essays (“Meaning in Life” and “Why It Matters”) are followed by responses from four distinguished scholars: John Koethe, Robert M. Adams, Nomy Arpaly (philosophy), and Jonathan Haidt (psychology), to whom Wolf then replies.  This gives the entire work a texture, range, and collective impact beyond any one or two essays.
 
Wolf starts by rightly questioning the two predominant models that we often have in the background when we evaluate our actions (or the actions of others): the egoistic perspective (it’s in my self-interest) or the impersonal perspective, “from the point of view of the universe.”  But there are situations where one or both models are unconvincing, and as models of motivation and practical reason they seem to leave out many of the motives and reasons that shape our lives.  These could be “reasons of love” or “reasons of pleasure” that will be distorted if pressed into an iron grid of self-interest or universal disinterestedness.  A proper reason of love will be directed towards a worthy object of love and, when the idea of meaningfulness is introduced, can give reasons for finding meaning beyond duty or love.
 
“Love” as “being gripped by” or actively engaging with a valuable object, to promote and protect it, is an apt example of Aristotle’s endoxic method, agreed-upon “things which are accepted by everyone, or by most people or by the wise.”  But what are those things? 
 
One prominent “fulfillment” view (popularized by Steve Jobs, whom Wolf does not mention) is that it does not matter what you do with your life as long as it is something you love: “Follow your passion,” figure out what turns you on, and go for it. (p. 10)
 
A second view says that a truly satisfying life involves something that is “larger than oneself,” a two-fold view of something that is independent of oneself and has its source outside of oneself.  Wolf uses excellent examples: Sisyphus has an objectively meaningless existence even if he magically believes that he is fulfilled by eternally rolling a rock uphill.  Spending a life smoking pot, conceivably an independent good with its source outside of oneself, does not contribute any benefits to anyone else.
 
Wolf proposes and defends a “fitting fulfillment” view, that what can truly give meaning to one's life is an activity that one feels answers a deep internal need for engagement and has a certain kind of objective worth to others.  The feeling of being occupied with something of independent value, that takes one out of oneself, is vital to our social natures and to a certain human tendency to try to see oneself from an external point of view (Nagel’s “view from nowhere,” or God's-eye point of view). This subjective desire is balanced with a sense of objective value: meaningfulness is a matter of active and loving engagement in projects of worth.
 
Why does it matter?  --because meaning can give shape and direction to one’s life that transcends simple self-interest or universal, objective good.  Wolf proposes that such a life can have objective value, but also recognizes a need for modesty (“Who’s to say? The elites?”) and she acknowledges that great care and reserve must be taken when assessing aesthetic, idealistic, and essentially private projects.  Weeding a garden might be very meaningful to a dedicated gardener but meaningless to someone who finds it a chore. 
 
Wolf returns to her insight that “much of what we do is not obviously justified by either morality or self-interest.”  She names various activities that by those criteria only would appear irrational or mistaken.  She then rejoins, “Yet to regard them as morally valuable, much less as morally better than alternatives, is to puff them up in a way that seems both pompous and hard to sustain.” (p. 50)   By comparing and contrasting meaning with self-interest and morality, Wolf sustains “reasons of love,” but with great modesty, and recognizes that her argument needs an idea of objective value.  “One can find the question, What has objective value? intelligible and important while remaining properly humble about one’s limited ability to discover the answer and properly cautious about the uses to which one’s partial and tentative answer may be put.” (p. 63)
 
The subsequent four comments focus upon particular questions, applications, and examples.  Koethe asks whether artists can have meaningful lives even when they are almost unknown, possibly delusional, or troubled by disturbing psychological compulsions.  “It is difficult to distinguish (from the viewpoint of an artist) between successful achievement of serious aesthetic aims and the delusion that one has them, and they’ve been achieved.” (p. 71) Is that life meaningful or wasted?  If it is art, is it significant?  “Even if (delusion or self-deception) jeopardizes my ability to derive satisfaction and comfort from a life based on aesthetic commitments . . . it is simply a predicament I have to live with.” (p. 73)
 
Adams raises the complex and morally important example of Claus von Stauffenberg’s project of rescuing Germany from Nazism, culminating in his attempt to assassinate Hitler and lead a coup d’état in July 1944.  He failed, but “did Stauffenberg himself, in the end, find his life meaningful because of his project, despite its failure?”  It was certainly an objective value independent of himself, recognized by others, but did it lead to feelings of fulfillment?  He was described on that evening as looking “indescribably sad.”  Stauffenberg recognized the moral ambiguity of patriotic love, which has inspired both admirable achievements and enormous wrongs and follies. (p. 82) He could see a path that held at least a slight hope of leading to a better future.  Those who shared his insights (and, in some measure, the plot) also recognized meaningfulness even in defeat, most famously Dietrich Bonhoeffer, peripherally involved but nonetheless convicted and executed (my example, not Adams’).  On the day of his death, Bonhoeffer was reported to have said, “This is the end –for me the beginning of life,” not simply a proclamation of Christian faith, but an assessment of his own life at the moment he faced torture and death.  It is important to recognize “a very important kind of positive meaningfulness in life that responds to objective goods with motives of love that are not impartially moral motives.”
 
Nomy Arpaly questions the necessary role that Wolf assigns to objective worth in providing meaning in life.  What appears worthless (a “goldfish nut” wholly devoted to her goldfish) may be more circumstantial: a mentally disabled person may find great meaning in such activity.  A plethora of values complicates the case; there is no “top” moral value among values.  Is there a truth about which love to value when loves conflict?
 
Jonathan Haidt asks whether the ideas of vital engagement and hive psychology can help solve the problem of objective meaning.  Mihaly Csikszentmihalyi has written about vital engagement as “a relationship to the world that is characterized both by experiences of flow (enjoyed absorption) and by meaning (subjective significance).”  The quality of the connection is vital; but does Wolf really need a theory of objective value?  Hive psychology suggests that the social sciences (and philosophy) have been plagued by methodological individualism, but if the fundamental unit of psychology is not the individual but the group, then perhaps the modern, independent sense of self is an anomaly.  As Enlightenment bees we busted out of the hive and burned it down, and the great challenge of modernity is to find hives for ourselves.  “We can co-create, or join into, something larger than ourselves.”  The challenge we face is to choose the right kind of hive, and to understand why it matters that we choose rightly. (pp. 100-101)
 
Wolf’s response is nuanced.  She maintains an interest in objective meaning, but recognizes its difficulties, and that no one is thereby authorized to assess it decisively on behalf of either oneself or others.  Haidt’s response suggests that there may be a reconciliation of objective value and subjective interest in the larger structures and sets of activities that a meaningful life can create (examples: sports, games, the arts).  Value emerges from the interests and commitments of people who share such activities, and this recognizes a continuum of value upon which a sense of a meaningful life may lie.  Through concepts of objective value and subjective fulfillment we can come to understand some of our longings and sources of satisfaction, properly assess some of our moral and evaluative intuitions, ask questions, and form hypotheses.  These concepts allow us to move closer to examining what kinds of projects and what kinds of lives are (or can be) meaningful.
 
Christopher Eisgruber, the President of Princeton University, asked each incoming first-year student (the class of 2018) to read Wolf’s book in the summer of 2014.  He then discussed this “pre-read” with students in Princeton’s residential colleges through the subsequent year.  He chose it for two reasons: because “it is a superb example of engaged, ethical writing, and I hope that it will introduce the freshmen to the kinds of scholarship they will encounter at Princeton,” and because “a key point in Wolf's argument pertains to the objectivity of value and why it matters; that question is important, and it inspires lively argument among undergraduates.”
 
The question of meaning in life, and why it matters, is at the heart of a liberal arts education, whether at Princeton or elsewhere.  It would be interesting to converse with Wolf in the context of the “Catholic Intellectual Tradition” examined and queried at Sacred Heart University, because there are important points of convergence and divergence.  Certainly, that tradition is concerned with assessing the whole of life, and it might also be in need of Wolf’s salutary reminder to reflect modestly when assessing what might make another’s life meaningful (or wasted), and of her prudent caution about the uses to which one’s partial or tentative answer might be put.  Theories of objective value can be linked superficially with theological claims of revealed value, a confusion that does neither intellectual tradition any good.  Wolf’s question is important from any Christian point of view, whether Catholic or other, and this book is a remarkable conversation in both its scope and its lack of pretension.  Her book engages questions that many undergraduates seem inclined to avoid, but those questions could nevertheless frame the engaged insight that is the point of the liberal arts.

Two recent discussions --in very different venues-- take an interesting look at the role of reading, individual knowledge, and disciplined reflection in the Internet Age.

The first author is Larry Sanger, one of the founders of Wikipedia, who has gone on to found the renowned public-interest wiki Citizendium.org and WatchKnow.org, a directory of online educational videos.  With a Ph.D. in Philosophy (Theory of Knowledge), Sanger is hardly one to downplay the role of the Internet in civil society, or to be accused of being a Luddite by the rhetorically inclined.

Sanger's article in Educause Review, "Individual Knowledge in the Internet Age," can be found here (and in .pdf here).

Sanger discusses in some detail the importance of individual knowledge, rooted in (though not exclusively in):

  • memorization --how can you really know something that you don't remember?
  • individual learning (as differentiated from social knowledge learned in groups); and
  • books --complex, deep strands of thinking that require absorption and uninterrupted attention.

The second author is Michael Hyatt, CEO of Thomas Nelson Publishers, a long-standing, mainstream Christian publishing empire famous for devotional literature and Bibles.  This is hardly special pleading from a print publisher: Thomas Nelson is in fact a leader in electronic publishing, and Hyatt has led the transformation.

Hyatt writes in defense of books, and of the activity of reading as a way of viewing the world --you can read his blog post here.

Beyond (or because of) his broad and deep commitment to digital publishing, Hyatt values serious readers' "ability to follow extended arguments and enroll their imagination in the reading experience."  What Hyatt regards as imperiled is the ability to engage in extended conversation, with its potential for transformative exchange, replaced instead by a media-driven amusement that "will become the ultimate value against which everything else is measured."

Libraries, like universities (and especially university libraries!), have a cultural agenda: that the examined life is definitely worth living, and that such examination requires reflection, conversation, and an openness to the experiences of people very different from our contemporaries --people of the past.  Amusement, and group knowledge socially constructed in networks heedless of group and individual memory --these are no replacement for the examined life.  In fact, Plato might suggest that they are merely the shadows upon the wall of the cave in which most people live their lives, unaware of the light, and of the source of the light, outside the cave.

Do I cavil with a straw man?  I hope so, but I fear that I do not.  As contemporary Americans we pride ourselves on a worldwide culture now built especially upon science, medicine, and technology --but we also prefer disconnected amusement, and some even disparage the "old" knowledge based upon remembering, individual reflection, and reading.

Sanger and Hyatt, from very different perspectives and social and business locations, converge on similar points.  That convergence is worth pondering.