Review and recommendation: Timothy Snyder’s On Tyranny: Twenty Lessons from the Twentieth Century.

On Tyranny: Twenty Lessons from the Twentieth Century, by Timothy Snyder.  Duggan Books (Crown), 2017. 126 p. ISBN 978-0804190114. List price $8.99

Yale University professor Timothy Snyder has spent a long time learning the languages, reading the documents, exploring the archives, and listening to witnesses of the totalitarian tyrannies of Europe in the last century –particularly of Nazi Germany and the Stalinist Soviet Union. His scholarship bore particular fruit in books such as Bloodlands: Europe between Hitler and Stalin and Black Earth: The Holocaust as History and Warning. He came to recognize that certain characteristics in the development of those tyrannies are present in the world today, and in the United States. This book is no partisan screed: Snyder recognizes in the 45th President features he knows from other contexts, and those other contexts underscore the drift towards totalitarianism apparent from Russia to Europe to the USA. On Tyranny is not only about an American moment, but about a worldwide one.

This short book consists of a brief introduction, twenty short chapters, and an epilogue. Each chapter directs an action, such as no. 12, "Make eye contact and small talk," followed by a historical example or an expansion of the point. All the actions can be undertaken in daily life; there is no grand theory here.

In place of a grand theory, there is a fundamental point: respect and value facts, truth, and accurate usage of our common language. In Moment (magazine), he explained: "Once you say that there isn’t truth and you try to undermine the people whose job it is to tell the truth, such as journalists, you make democracy impossible." He told Bill Maher (at 2:02) that while "post-fact" postmodernism might connote "Berkeley, baguettes, and France and nice things," it more likely means that "every day doesn't matter; details don't matter; facts don't matter; all that matters is the message, the leader, the myth, the totality" –a condition of Europe in the 1920s. Such disdain for the truth goes hand-in-hand with conspiracy theories that assign blame to a group accused of undermining the purity of the majority. "Rather than facing up to the fact that life is hard and that globalization presents challenges, you name and blame people and groups who you say are at fault." Jews, Mexicans, Muslims, Rohingya, Tutsis, Hutus, globalists, evolutionists, or any other "outsider." The myth: "Make [fill in the blank] great again."

A librarian or researcher might particularly resonate with Snyder's directions, "Be kind to our language," "Believe in truth," and "Investigate" (lessons 9-11). This is all a way to prepare to "be calm when the unthinkable arrives" (lesson 18) –when a leader exploits a catastrophic event to urge followers to trade freedom for security, and suspends the rule of law. The Chief Executive may or may not attempt to stage a coup; American democracy survived the dark moment after Charlottesville. Snyder told Salon in August, "We are hanging by our teeth to the rule of law. That was my judgment at the beginning of his presidency and it is still my judgment now. The rule of law is what gives us a chance to rebuild the system after this is all done."

Whether or not current politics result in tyranny and oppression is still (at this writing) an open question. The importance of Snyder's book is that it points beyond this moment to the wider trends and challenges of a world which is global (like it or not), connected (like it or not), and interdependent on both our natural climates and accrued, hard-won cultural heritages. A University founded on "a rigorous and interdisciplinary search for truth and wisdom" that "forms the cornerstone of all University life and welcomes people from all faiths and cultures" cannot leave our students unprepared. In order to make history, young Americans will have to know some (p. 126). Will that be the twenty-first lesson on tyranny from the twenty-first century?

–Gavin Ferriby

Fuzzies, Techies, and Insufficient Thinking

The Fuzzy and the Techie: Why the Liberal Arts Will Rule the Digital World, by Scott Hartley.  New York: Houghton Mifflin Harcourt, 2017. ISBN 978-0544-944770. $28.00 List.

Hartley writes that a "false dichotomy" divides computer science and the humanities, and extends this argument to STEM curricula as well. For example, Vinod Khosla of Sun Microsystems has claimed that "little of the material taught in liberal arts programs today is relevant to the future." Hartley believes that such a mind-set is wrong, for several reasons. Such a belief encourages students to pursue learning only in vocational terms: preparing for a job. STEM fields require intense specialization, but some barriers to coding (for example) are dropping with web services and communities such as GitHub and Stack Overflow. Beyond narrow vocational boundaries, Hartley argues, a liberal arts education widens a student's horizons, inquires about human behavior, and finds opportunities for products and services that will meet human needs. The "softer" subjects help people determine which problem they're trying to solve in the first place.

That said, the book does not move much further. Hartley never really tries to provide a working definition of a true "liberal arts" education except to distinguish it from STEM or Computer Science. By using the vocabulary of "fuzzy" and "techie" he encountered at Stanford, he inadvertently extends a mentality that has fostered start-ups notably acknowledged to be unfriendly to women. So far as I could determine, only a handful of Hartley's cited sources were published anywhere but digitally –although the "liberal arts," however defined, have a very long tradition of inquiry and literature, which Hartley passes by almost breezily and which is very little in evidence here. His book is essentially a series of stories of companies and their founders, many of whom did not earn "techie" degrees.

Mark Zuckerberg's famous motto "move fast and break things" utterly discounted the social and cultural values of what might get broken. Partly in consequence, the previously admired prodigies of Silicon Valley start-ups are facing intense social scrutiny in 2017 in part as a result of their ignorance of human fallibility and conflict.
Hartley is on to a real problem, but he needs to do much more homework to see how firmly the false dichotomy between sciences and humanities is rooted in American (and worldwide) culture. The tendency, for example, to regard undergraduate majors as job preparation rather than as disciplined thinking, focused interest, and curiosity is so widespread that even Barack Obama displayed it. ("Folks can make a lot more, potentially, with skilled manufacturing or the trades than they might with an art history degree" –Barack Obama's remark in Wisconsin in 2014; he did retract it later.)

Genuine discussion of the values of humanities and STEM degrees can only take place with the disciplined thinking, awareness of traditions, and respect for diversity that are hallmarks of a true liberal arts education.

Digital Commons, BePress, and Elsevier: what is Plan B?

The recent acquisition of BePress and Digital Commons by Elsevier has occasioned a snowstorm of commentary and opinion.  Some of that has not been helpful, even though well-intended.  Sacred Heart University Library belongs to a 33-member group called the Affinity Libraries Group.  We are all private, Masters-1 universities (some with several doctoral degrees), relatively mid-sized, between the Oberlin Group of liberal arts college libraries and the Association of Research Libraries (ARL).

Much of the following is going to be discussed at a meeting alongside or outside the coming CNI meeting in December in Washington, DC –but since CNI is expensive ($8,200/year), SHU is not a member, nor, I suspect, are other Affinity Libraries.  I am hoping that, using one technology or another, the Affinity Libraries can have a conversation as well.

The Affinity Group has changed over the years; we (or they, meaning our predecessor directors) used to meet often, sometimes in quite successful stand-alone events not connected with another event such as ALA Annual.  Others have said to me that in some ways the Affinity Group (as it was then) really came down to “professional and personal friends of Lew Miller” (former director at Butler), and while I’m not sure that’s fair, it is accurate in the sense that personal relationships formed a strong glue for the group. As directors retired or moved on, group adhesiveness accordingly changed. I’m avoiding the word or metaphor “decline” here because sometimes things just change, and the Affinity Group has been one of them.  No one has been sitting around in the meantime.

We do share a strong commitment to the annual Affinity Group statistics. Perhaps now a discussion about institutional repositories and Digital Commons in particular could garner some interest with attention directed to issues for libraries of our size.

Some of the hoopla surrounding Elsevier’s acquisition of BePress has simply given occasion to express contributors’ intense dislike of Elsevier and its business model of maximizing profits above all else, certainly a justified objection given the state of all our budgets.

I think the anonymous Library Loon (Gavia Libraria) has pretty well summed up various points (though I don’t agree with every one of her statements), and Matt Ruen’s subsequent comment on August 9 is also helpful.  Paul Royster at the University of Nebraska-Lincoln wrote on September 7 on the SPARC list:

The staff at BePress have been uniformly helpful and responsive, and there is no sign of that changing. They are the same people as before. They have never interfered with our content. I do not believe Elsevier paid $150 million in order to destroy BePress. What made it worth that figure was 1. the software, 2. the staff, and 3. the reputation and relationships. BePress became valuable by listening to their customers; Elsevier could learn a lot from them about managing relationships –and I hope they do.  BePress is also in a different division (Research) than the publications units that have treated libraries and authors so high-handedly. The stronger BePress remains, the better will be its position vis-a-vis E-corp going forward. Bashing BePress over its ownership and inciting its customers to jump ship strikes me as not in the best interests of the IRs or the faculty who use them.

Almost every college library has relationships with Elsevier already; deserting BePress is not a moral victory of right over wrong. The moral issue here is providing wider dissemination and free access for content created by faculty scholars. No one does that better than BePress, and until that changes, I see no cause for panic. Of course there are no guarantees, and it is always wise to have a Plan B and an exit strategy. But cutting off BePress to spite their new ownership does not really help those we are trying to serve.

I share Royster’s primary commitment to the free dissemination of content created by faculty scholars. Digital Commons has done that for SHU in spades, and has been a game-changer in this university and library, in my experience. I know that many share such a primary commitment; many also share an enduring and well-grounded suspicion of just about anything Elsevier might do.  As a firm, their behavior often has been downright divisive and sneaky (we can tell our stories…).  When I first read of the sale, my gut response was, “Really? Great, here’s a big problem when I don’t really want another.”   Digital Commons is one of the three major applications that power my library: 1) the integrated library services platform; 2) Springshare’s suite of research & reference applications; and 3) BePress.  Exiting BePress would be distracting, distressing, and downright burdensome.  As Royster writes, “there are no guarantees.”  Now we have to have a Plan B and an exit strategy, even if we never use it.

What I fear most is Gavia Libraria’s last option (in her blog post): that Elsevier will simply let “BePress languish undeveloped, with an eye to eventually shrugging and pulling the plug on it.”  I have seen similar “application decay” with ebrary, RefWorks, and (actually) SerialsSolutions, several of which have languished (or are languishing) for years before any genuine further development.  I watched their talented creators and originating staff members drift away into other ventures (e.g., ThirdIron).  Were that to happen, it would be bad news for SHU and other Affinity members.  Royster’s statement “they are the same people as before” has not always held true in the past when smaller firms become subject to hiring processes mandated by larger organizations (e.g., SerialsSolutions’ staff members now employed by ProQuest).

On SPARC’s list, there has been great discussion about cooperation and building a truly useful non-profit, open-source application suite for institutional repositories, digital publishing, authors’ pages (like SelectedWorks), etc.  Everyone knows that’s a long way off, without any disrespect to Islandora, Janeway, DSpace, or any other application.  Digital Commons and SelectedWorks are pretty well the state of the art, and their design and consequent workflow decisions have benefited the small staff of the SHU Library enormously (even with the occasional hiccups and anomalies). Digital Commons Network has placed SHU in the same orbit or gateway as far larger and frankly more prestigious colleges and universities, and I could not be happier with that.  I have my own SelectedWorks page and I like it.  I would be sorry to see all this go –unless a truly practical alternative emerges.  Who knows when that will be?

In the meantime, we will be giving attention to Plan B –until now we have not had one or felt we needed one (probably an unfortunate oversight, but it just did not become a priority).  I really don’t yet know what our Plan B will be.

I sense that if OCLC were to develop a truly useful alternative to Digital Commons (one well beyond DSpace as it presently exists), it might gain some traction in the market (despite all of our horror stories about OCLC, granted).  Open Science Framework, Islandora, and others hold promise but probably cannot yet compete feature-by-feature with Digital Commons (at least, I have not seen anything that comes even close).  If you think I’m wrong, please say so! –I will gladly accept your correction.

Do You Know Yewno, and If Yewno, Exactly What Do You Know?

This is the third of my "undiscovered summer reading" posts; see also the first and second.

At the recent Association of College and Research Libraries conference in Baltimore I came across Yewno, a search-engine-like discovery or exploration layer that I had heard about.  I suspect that Yewno, or something like it, could be the "next big thing" in library and research services.  I have served as a librarian long enough both to be very interested and to be wary at the same time –so many promises have been made by the commercial information technology sector, and the reality has fallen far short –remember the hype about discovery services?

Yewno is a so-called search app; it "resembles a search engine –you use it to search for information, after all –but its structure is network-like rather than list-based, the way Google's is. The idea is to return search results that illustrate relationships between relevant sources" –mapping them out graphically (like a mind map). Those words are quoted from Adrienne LaFrance's Atlantic article on the growing understanding of the Antikythera mechanism as an example of computer-assisted associative thinking (see, all these readings really come together).  LaFrance traces the historical connections between "undiscovered public knowledge," Vannevar Bush's Memex machine in the epochal As We May Think, and Yewno.  The hope is that through the use of an application such as Yewno, associations could be traced between ancient time-keeping, Babylonian and Arabic mathematics, medieval calendars, astronomy, astrological studies, ancient languages, and other realms of knowledge. At any rate, that's the big idea, and it's a good one.

So who is Yewno meant for, and what's it based on?

LaFrance notes that Yewno "was built primarily for academic researchers," but I'm not sure that's true, strictly. When I visited the Yewno booth at ACRL, I thought several things at once: 1) this could be very cool; 2) this could actually be useful; 3) this is going to be expensive (though I have neither requested nor received a quote); and 4) someone will buy them, probably Google or another technology octopus. (Subsequent thought: where's Google's version of this?)  I also thought that intelligence services and corporate intelligence advisory firms would be very, very interested –and indeed they are.  Several weeks later I read Alice Meadows' post, "Do You Know About Yewno?" on the Scholarly Kitchen blog, and her comments put Yewno in clearer context. (Had I access to Yewno, I would have searched "yewno.")

Yewno is a start-up venture by Ruggero Gramatica (if you're unclear, that's a person), a research strategist with a background in applied mathematics (Ph.D., King's College London) and an M.B.A. (University of Chicago). He is the first-named author of "Graph Theory Enables Drug Repurposing," a paper (DOI) in PLOS ONE that introduces:

a methodology to efficiently exploit natural-language expressed biomedical knowledge for repurposing existing drugs towards diseases for which they were not initially intended. Leveraging on developments in Computational Linguistics and Graph Theory, a methodology is defined to build a graph representation of knowledge, which is automatically analysed to discover hidden relations between any drug and any disease: these relations are specific paths among the biomedical entities of the graph, representing possible Modes of Action for any given pharmacological compound. We propose a measure for the likeliness of these paths based on a stochastic process on the graph.
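The graph methodology that abstract describes can be sketched in miniature. Everything below is invented for illustration –the entities, the edges, and the uniform random-walk "likeliness" measure are toy stand-ins for the paper's actual knowledge graph and stochastic model:

```python
# Toy knowledge graph: biomedical entities linked by relations
# extracted from text. All entities and edges here are invented.
graph = {
    "aspirin":   ["COX-1", "COX-2"],
    "COX-1":     ["platelet aggregation"],
    "COX-2":     ["inflammation", "prostaglandin E2"],
    "prostaglandin E2": ["colorectal cancer"],
    "inflammation":     ["colorectal cancer"],
    "platelet aggregation": ["thrombosis"],
}

def paths(src, dst, path=None):
    """Enumerate simple paths from a drug to a disease:
    each path is a candidate hidden Mode of Action."""
    path = (path or []) + [src]
    if src == dst:
        yield path
        return
    for nxt in graph.get(src, []):
        if nxt not in path:
            yield from paths(nxt, dst, path)

def likeliness(path):
    """Random-walk probability of a path: the product of uniform
    transition probabilities at each step (a crude stand-in for
    the paper's stochastic measure of path likeliness)."""
    p = 1.0
    for node in path[:-1]:
        p /= len(graph[node])
    return p

for path in paths("aspirin", "colorectal cancer"):
    print(" -> ".join(path), f"(p = {likeliness(path):.3f})")
```

The point of the sketch is only the shape of the idea: knowledge expressed as a graph, hidden relations discovered as paths, and a probability attached to each path.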

Yewno does the same thing in other contexts:

an inference and discovery engine that has applications in a variety of fields such as financial, economics, biotech, legal, education and general knowledge search. Yewno offers an analytics capability that delivers better information and faster by ingesting a broad set of public and private data sources and, using its unique framework, finds inferences and connections. Yewno leverages on leading edge computational semantics, graph theoretical models as well as quantitative analytics to hunt for emerging signals across domains of unstructured data sources. (source: Ruggero Gramatica's LinkedIn profile)

This leads to several versions of Yewno: Yewno Discover, Yewno Finance, Yewno Life Sciences, and Yewno Unearth.  Ruth Pickering, the company's co-founder and Chief Business Development & Strategy Officer, comments, "each vertical uses a specific set of ad-hoc machine learning based algorithms and content. The Yewno Unearth product sits across all verticals and can be applied to any content set in any domain of information."  Don't bother calling the NSA –they already know all about it (and probably use it, as well).

Yewno Unearth is relevant to multiple functions of publishing: portfolio categorization, the ability to spot gaps in content, audience selection, editorial oversight and description, and other purposes for improving a publisher's position, both intellectually and in the information marketplace. So Yewno Discover is helpful for academics and researchers, but the whole of Yewno is also designed to relay more information about them to their editors, publishers, funders, and those who will in turn market publications to their libraries.  Elsevier, Ebsco, and ProQuest will undoubtedly appear soon in librarians' offices with Yewno-derived information, and that encounter could prove to be truly intimidating.  So Yewno might be a very good thing for a library, but not simply an unalloyed very good thing.

So what is Yewno really based on? The going gets more interesting.

Meadows notes that Yewno's underlying theory emerged from the field of complex systems at the foundational level of econophysics, an inquiry "aimed at describing economic and financial cycles" utilizing mathematical structures derived from physics. The mathematical framework, involving uncertainty, stochastic (random probability distribution) processes, and nonlinear dynamics, came to be applied to biology and drug discovery (hello, Big Pharma). This kind of information processing is described in detail in a review article, "Deep Learning," in Nature (Vol. 521, 28 May 2015, doi:10.1038/nature14539).  Developing machine learning, deep learning "allows computational models that are composed of multiple processing layers to learn representations of data with multiple levels of abstraction."  Such deep learning "discovers intricate structure in large data sets by using the backpropagation algorithm to indicate how a machine should change its internal parameters that are used to compute the representation in each layer from the representation in the previous layer." Such "deep convolutional nets" have brought about significant breakthroughs in processing images, video, and speech, while "recurrent nets" have brought new learning powers to "sequential data such as text and speech."

The article goes on in great detail, and I do not pretend I understand very much of it.  Its discussion of recurrent neural networks (RNNs), however, is highly pertinent to libraries and discovery.  The backpropagation algorithm is basically a process that adjusts the weights used in machine analysis while the network is being trained.  For example, RNNs "have been found to be very good at predicting the next character in the text, or next word in a sequence," and by such backpropagational adjustments, machine language translations have achieved greater levels of accuracy. (But why not complete accuracy? –read on.)  The process "is more compatible with the view that everyday reasoning involves many simultaneous analogies that each contribute plausibility to a conclusion." In their review's conclusion, the authors expect that "systems that use RNNs to understand sentences or whole documents will become much better when they learn strategies for selectively attending to one part at a time."
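The weight-adjustment idea can be made concrete in a few lines. The snippet below is a toy, not an RNN: it trains a single softmax layer to predict the next character of a short string by exactly the kind of backpropagation-style update just described (make a prediction, measure the error, nudge the weights). The training text and all names are invented for illustration.

```python
import numpy as np

# Toy next-character predictor. A real RNN also carries a hidden
# state across time steps; this sketch shows only the
# predict / error / weight-update loop.
text = "banana bandana banana bandana "
chars = sorted(set(text))
idx = {c: i for i, c in enumerate(chars)}
V = len(chars)

rng = np.random.default_rng(0)
W = rng.normal(0, 0.01, (V, V))   # weights: previous char -> next-char scores

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

for epoch in range(200):
    for prev, nxt in zip(text, text[1:]):
        p = softmax(W[idx[prev]])    # forward pass: predicted distribution
        grad = p.copy()
        grad[idx[nxt]] -= 1.0        # gradient of the cross-entropy loss
        W[idx[prev]] -= 0.5 * grad   # backprop-style step: adjust the weights

def predict(c):
    return chars[int(np.argmax(W[idx[c]]))]

print(predict("b"))   # prints 'a': 'b' is always followed by 'a' in the text
```

The weights are adjusted while the analysis runs, sample by sample; stacking such layers, adding recurrence, and scaling the data is what separates this toy from the systems the Nature review describes.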

After all this, what do you know? Yewno presents the results of deep learning through recurrent neural networks that identify nonlinear concepts in a text, a kind of "knowledge." Hence Ruth Pickering can plausibly state:

Yewno's mission is "Knowledge Singularity" and by that we mean the day when knowledge, not information, is at everyone's fingertips. In the search and discovery space the problems that people face today are the overwhelming volume of information and the fact that sources are fragmented and dispersed. There's a great T.S. Eliot quote, "Where's the knowledge we lost in information," and that sums up the problem perfectly. (source: Meadows' post)

Ms. Pickering perhaps revealed more than she intended.  Her quotation from T.S. Eliot is found in a much larger and quite different context:

Endless invention, endless experiment,
Brings knowledge of motion, but not of stillness;
Knowledge of speech, but not of silence;
Knowledge of words, and ignorance of the Word.
All our knowledge brings us nearer to our ignorance,
All our ignorance brings us nearer to death,
But nearness to death no nearer to GOD.
Where is the Life we have lost in living?
Where is the wisdom we have lost in knowledge?
Where is the knowledge we have lost in information?
The cycles of Heaven in twenty centuries
Bring us farther from GOD and nearer to the Dust. (Choruses from The Rock)

Eliot's interest is in the Life we have lost in living, and his religious and literary use of the word "knowledge" signals the puzzle at the very base of econophysics, machine learning, deep learning, and backpropagation algorithms.  Deep learning performed by machines mimics what humans do, their forms of life.  Pickering's "Knowledge Singularity" alludes to the semi-theological vision of Ray Kurzweil's millennialist "Singularity": a machine intelligence infinitely more powerful than all human intelligence combined.  In other words, where Eliot is ultimately concerned with Wisdom, the Knowledge Singularity is ultimately concerned with Power.  Power in the end means power over other people: otherwise it has no social meaning apart from simply more computing.  Wisdom interrogates power, and questions its ideological supremacy.

For example, three researchers at the Center for Information Technology Policy at Princeton University have shown that "applying machine learning to ordinary human language results in human-like semantic biases" ("Semantics derived automatically from language corpora contain human-like biases," Science 14 April 2017, Vol. 356, issue 6334: 183-186, doi:10.1126/science.aal4230). The results of their replication of a spectrum of known biases (measured by the Implicit Association Test) "indicate that text corpora contain recoverable and accurate imprints of our historic biases, whether morally neutral as towards insects or flowers, problematic as race and gender, or even simply veridical, reflecting the status quo distribution of gender with respect to careers or first names." Their approach holds "promise for identifying and addressing sources of bias in culture, including technology."  The authors laconically conclude, "caution must be used in incorporating modules constructed via unsupervised machine learning into decision-making systems."  Power resides in such decisions about other people, resources, and time.

Arvind Narayanan, who published the paper with Aylin Caliskan and Joanna J. Bryson, noted that "we have a situation where these artificial-intelligence systems may be perpetuating historical patterns of bias that we might find socially unacceptable and which we might be trying to move away from."  The Princeton researchers developed an experiment with a program called GloVe that replicated the Implicit Association Test in a machine-learning representation of co-occurrent words and phrases.  Researchers at Stanford turned this loose on roughly 840 billion words from the Web, and looked for co-occurrences and associations of words such as "man, male" or "woman, female" with "programmer, engineer, scientist, nurse, teacher, librarian."  They showed familiar biases in distributions of associations, biases that can "end up having pernicious, sexist effects."

For example, machine-learning programs can translate foreign languages into sentences that reflect or reinforce gender stereotypes. Turkish uses a gender-neutral third-person pronoun, "o."  Plugged into the online translation service Google Translate, however, the Turkish sentences "o bir doktor" and "o bir hemşire" are translated into English as "he is a doctor" and "she is a nurse."  . . . "The biases that we studied in the paper are easy to overlook when designers are creating systems," Narayanan said. (Source: Princeton University, "Biased Bots" by Adam Hadhazy.)
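The association measurement behind these studies can be sketched directly. The word vectors below are invented three-dimensional toys, not real GloVe embeddings; they serve only to show how relative cosine similarity to "man" versus "woman" yields an Implicit-Association-style score:

```python
import numpy as np

# Invented toy embeddings, chosen to mimic how co-occurrence
# statistics in real corpora encode gendered associations.
vec = {
    "man":        np.array([ 1.0,  0.2, 0.1]),
    "woman":      np.array([-1.0,  0.2, 0.1]),
    "programmer": np.array([ 0.8,  0.9, 0.0]),
    "nurse":      np.array([-0.7,  0.8, 0.1]),
}

def cos(a, b):
    """Cosine similarity: the standard closeness measure for embeddings."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def association(word):
    """Positive: the word sits closer to 'man'; negative: closer to
    'woman'. This difference-of-similarities is the core comparison
    behind Implicit-Association-style tests on embeddings."""
    return cos(vec[word], vec["man"]) - cos(vec[word], vec["woman"])

for w in ("programmer", "nurse"):
    print(w, round(association(w), 3))
```

With real embeddings trained on billions of words, the same arithmetic recovers the human biases the Princeton paper measured; the method itself is this simple.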

Yewno is exactly such a system insofar as it mimics human forms of life, which include, alas, the reinforcement of biases and prejudice.  So in the end, do you know Yewno, and if Yewno, exactly what do you know? –that "exactly what" will likely contain machine-generated replications of problematic human biases.  Machine translations will never offer perfect, complete translations of languages, because language is never complete –humans will always use it in new ways, with new shades of meaning and connotations of plausibility, because humans go on living in their innumerable, linguistic forms of life.  Machines have to map language within language (here I include mathematics as kinds of languages with distinctive games and forms of life).  No "Knowledge Singularity" can occur outside of language, because it will be made of language: but the ideology of the "Singularity" can conceal its origins in many forms of life, and thus appear "natural," "inevitable," and "unstoppable."

The "Knowledge Singularity" will calcify bias and injustice in an everlasting status quo unless humans, no matter how comparatively deficient, resolve that knowledge is not a philosophical problem to be solved (such as in Karl Popper's Worlds 1, 2, and 3), but a puzzle to be wrestled with and contested in many human forms of life and language (Wittgenstein). Only by addressing human forms of life can we ever address the greater silence and the Life that we have lost in living.  What we cannot speak about, we must pass over in silence (Wovon man nicht sprechen kann, darüber muss man schweigen, sentence 7 of the Tractatus) –and that silence, contra both the positivist Vienna Circle and Karl Popper (who was never part of it), is the most important part of human living.  In the Tractatus Wittgenstein dreamt, as it were, a conclusive solution to the puzzle of language –but such a solution can only be found in the silence beyond strict logical (or machine) forms: a silence of the religious quest beyond the ethical dilemma (Kierkegaard).

This journey through my "undiscovered summer reading," from the Antikythera mechanism to the alleged "Knowledge Singularity," has reinforced my daily, functional belief that knowing is truly something that humans do within language and through language, and that the quest which makes human life human is careful attention to the forms of human life, and the way that language, mathematics, and silence are woven into and through those forms. The techno-solutionism inherent in educational technology and library information technology –no matter how sophisticated– cannot undo the basic puzzle of human life: how do we individually and socially find the world? (Find: in the sense of locating, of discovering, and of characterizing.)  Yewno will not lead to a Knowledge Singularity, but to derived bias and reproduced injustice, unless we acknowledge its limitations within language.

The promise of educational and information technology becomes more powerful when approached with modesty: there are no quick, technological solutions to the puzzles of education, of finance, of information discovery, of "undiscovered public knowledge."  What those of us who are existentially involved with the much-maligned, greatly misunderstood, and routinely dismissed "liberal arts" can contribute is exactly what makes those technologies humane: a sense of modesty, proportion, generosity, and silence.  Even to remember those at this present moment is a profoundly counter-cultural act, a resistance of the techno-ideology of unconscious bias and entrenched injustice.

Undiscovered Summer Reading: The Puzzles of Information and Educational Technologies

(This is part two of my summer reading posts thus far: see parts one and three.)

Another article found in my strange cleaning mania is not so very old: George Veletsianos and Rolin Moe's The Rise of Educational Technology as a Sociocultural and Ideological Phenomenon. Published by (upper-case obligatory) EDUCAUSE, it argues that "the rise of educational technology is part of a larger shift in political thought" that favors (so-called) free-market principles over government oversight, and that it is also a response to the increasing costs of higher education.  Edtech proponents have (always? often?) "assumed positive impacts, promoting an optimistic rhetoric despite little empirical evidence of results –and ample documentation of failures."  In other words, we are in the presence of a powerful ideology, and an ideology of the powerful: the neoliberal state and its allies in higher education.

The authors frame their argument through several assertions.  First: the edtech phenomenon is a response to the increasing price of higher education, seen as a way to slow, stop, or reverse rising prices.  The popular press questions the viability of college degrees and of higher education itself, sometimes with familiar "bubble" language borrowed from market analyses.  Second: the edtech phenomenon reflects a shift in political thought from government to free-market oversight of education: reducing governmental involvement and funding, along with an increasing emphasis on market forces, "has provided a space and an opportunity for the edtech industry to flourish." Although set to accelerate vastly under Donald Trump and Betsy DeVos, funding reductions and a turn to "private sector" responses have long been in evidence, associated with the "perspective" (the authors eschew "ideology") of neoliberalism: the ideology that free-market competition invariably results in improved services at lower costs.  Outsourcing numerous campus services supposedly leads to lower costs, but also "will relegate power and control to non-institutional actors" (and that is what neoliberalism is all about).

The authors (thirdly) assert that "the edtech phenomenon is symptomatic of a view of education as a product to be packaged, automated, and delivered" –in other words, neoliberal service and production assumptions transferred to education.  This ideology is enabled by a "curious amnesia, forgetfulness, or even willful ignorance" (remember: we are in the presence of an ideology) "of past phases of technology development and implementation in schools."  When I was in elementary school (late 1950s and 1960s), the phase was filmstrips, movies, and "the new math," and it worked hand-in-glove with Robert McNamara's Ford Motor Company, and subsequently his Department of Defense, to "scale" productivity-oriented education for obedient workers and soldiers (the results of the New Math were in my case disastrous, and I am hardly alone).  The educational objectivism implicit in much of edtech sits oddly alongside tributes to professed educational constructivism –"learning by doing," which tends then to be reserved for those who can afford it in the neoliberal state.  I have bristled when hearing the cliché that the new pedagogy aims for "the guide on the side, not the sage on the stage" –when my life and outlook have been changed by carefully crafted, deeply engaging lectures (but remember: we are in the presence of an ideology).

Finally, the authors assert that "the edtech phenomenon is symptomatic of the technocentric belief that technology is the most efficient solution to the problems of higher education."  There is an ideological solutionism afoot here. Despite a plethora of evidence to the contrary, techno-determinism (technology shapes its emerging society autonomously) and techno-solutionism (technology will solve societal problems) assume the power of the "naturally given," a sure sign of ideology.  Ignorance of edtech's history and impact "is illustrated by public comments arguing that the education system has remained unchanged for hundreds of years" (by edX CEO Anant Agarwal, among others), when the reality is one of academia's constant development and change.  Anyone who thinks otherwise should visit a really old institution such as Oxford University: architecture meant to serve medieval educational practices, retro-fitted to 19th- and early 20th-century uses, and now sometimes awkwardly retro-fitted yet again to the needs of a modern research university.  The rise and swift fall of MOOCs is another illustration of the remarkable ignorance that ideological techno-solutionism mandates in order to appear "smart" (or at least in line with Gartner's hype cycle).

The authors conclude, "unless greater collaborative efforts take place between edtech developers and the greater academic community, as well as more informed deep understandings of how learning and teaching actually occur, any efforts to make edtech education's silver bullet are doomed to fail."  They recommend that edtech developers and implementers commit to supporting their claims with empirical evidence "resulting from transparent and rigorous evaluation processes" (!–no "proprietary data" here); invite independent expertise; attend to discourse (at conferences and elsewhere) that is critical of edtech rather than merely promotional; and undertake reflection that is more than personal, situational, or reflective of one particular institutional location.  Edtech as a scholarly field and community of practice could in this way continue efforts to improve teaching and learning that will bear fruit for educators, not just for corporate technology collaborators.

How many points of their article are relevant by extension to library information technology, its implementation, and reflections on its use!  Commendably, ACRL and other professional venues have subjected library technologies to critical review and discourse (although LITA's Top Technology Trends Committee too often reverts to techno-solutionism and boosterism from the same old same old).  Veletsianos and Moe's points regarding the neoliberal ideological suppositions of the library information technology market, however, are well taken –just attend a vendor presentation on the exhibition floor for a full demonstration.  At the recent conference of the Association of College & Research Libraries, the critical language of the Information Literacy Framework was sometimes turned on librarianship and library technology itself ("authority is constructed and contextual"), as in the critique of the term "resilient" (.pdf) and the growing usage of the term "wicked challenges" for those times when we don't know what we don't know, or even how to ask what that would be.

Nevertheless, it would be equally historically ignorant to deny the considerable contributions information technology has made to contemporary librarianship, even when such contributions should be regarded cautiously.  There are still interesting new technologies that can contribute a great deal even when they are neither disruptive nor revolutionary.  The most interesting new technology or process I saw at ACRL (by far) is Yewno, which I will discuss in my third blog post.

Is Undiscovered Public Knowledge A Problem or a Puzzle?

(This is the first of three posts about my semi-serendipitous summer reading; here are links to posts two and three.)

This last week I was seized by a strange mania: clean the office. I have been at my current desk and office since 2011 (when a major renovation disrupted it for some months).  It was time to clean –spurred by notice that boxes of papers would be picked up for the annual certified, assured shredding. I realized I had piles of FERPA-protected paperwork (exams, papers, 1-1 office hours memos, you name it).  Worse: my predecessor had left me large files that I hadn't looked at in seven years, and that contained legal papers, employee annual performance reviews, old resumes, consultant reports, accreditation documentation, etc. Time for it all to go!  I collected six large official boxes (each twice the size of a paper ream), but didn't stop there: I also cleaned the desk; cleaned up the desktop; recycled odd electronic items, batteries, and lightbulbs; and forwarded a very large number of vendor advertising pens to a cache for our library users ("do you have a pen?"). On Thursday I was left with the moment after: I had cleared it all out –now what?

The "what" turned out to be various articles I had collected and printed for later reading, and then never actually read –some more recent, some a little older. (This doesn't count the articles I recycled as no longer relevant or particularly interesting; my office is not a bibliography in itself.) Unintentionally, several of these articles wove together concerns that have been growing in the back of my mind –and have been greatly pushed forward by the events of the past year (Orlando, Bernie Sanders, the CombOver, British M.P. Jo Cox –all now seem as distant as events of the late Roman republic, pace Mary Beard).

"Undiscovered public knowledge" seems an oxymoron (but less of one than "Attorney General Jeff Sessions").  If "public," then why "undiscovered"?  It means knowledge that once was known by someone, recorded, properly interred in some documentary vault, and left unexamined and undiscovered by anyone else.  The expression is used in Adrienne LaFrance's Searching for Lost Knowledge in the Age of Intelligent Machines, published in The Atlantic, December 1, 2016.   Her leading example is the fascinating story of the Antikythera mechanism, some sort of ancient time-piece recovered from a submerged wreck off Antikythera (a Greek island between the Peloponnese and Crete, known also as Aigila or Ogylos).  It sat in a crate outside the National Archaeological Museum in Athens for a year, and then was largely forgotten by all but a few dogged researchers, who pressed on for decades in the attempt to figure out exactly what it is.

The Antikythera mechanism has come to be understood only as widely separated knowledge has been combined by luck, persistence, intuition, and conjecture.  How did such an ancient time-piece come about; who made it, based upon what thinking, and from where?  It could not have been a one-off, but it seems to be a unique lucky find from the ancient world, unless other mechanisms or pieces are located elsewhere in undescribed or poorly described collections.  For example, a 10th-century Arabic manuscript suggests that such a mechanism may have influenced the development of modern clocks, and in turn built upon ancient Babylonian astronomical data.  (For more see Josephine Marchant's Decoding the heavens: a 2,000-year-old computer–and the century-long search to discover its secrets, Cambridge, Mass.: Da Capo Press, 2009: Worldcat; Sacred Heart University Library.) Is there "undiscovered public knowledge" that would include other mechanisms, other clues to its identity, construction, development, and influence?

"Undiscovered public knowledge" is a phrase made modestly famous by Don R. Swanson in an article of the same name in The Library Quarterly, 1986.  This interesting article is a great example of the way that library knowledge and practice tend to become isolated in the library silo, when they might have benefited many others located elsewhere. (It is also a testimony to the significant, short-sighted mistake made by the University of Chicago, Columbia University, and others, in closing their library science programs in the 1980s and 1990s, just when such knowledge was going public in Yahoo, Google, Amazon, GPS applications, and countless other developments.)  Swanson's point is that "independently created fragments are logically related but never retrieved, brought together, and interpreted." The "essential incompleteness" of search (or now: discovery) makes "possible and plausible the existence of undiscovered public knowledge" (to quote the abstract –the article is highly relevant and well developed).  Where Swanson runs into trouble, however, is his use of Karl Popper's distinction between subjective and objective knowledge, the critical approach within science that distinguishes between "World 2" and "World 3."  (Popper's Three Worlds (.pdf), lectures at the University of Michigan in 1978, were a favorite of several of my professors at Columbia University School of Library Service; Swanson's article in turn was published and widely read while I was studying there.)

Popper's critical worlds (1: physical objects and events, including biological ones; 2: mental objects and events; 3: objective knowledge, a human but not Platonic zone) both enable the deep structures of information science as now practiced by our digital overlords and signal their fatal flaw.  They enable those deep structures and algorithms of "discovery" by assuming the link between physical objects and events, mental objects, and objective knowledge symbolically notated (language, mathematics). Simultaneously, Popper's linkage signals their fatal flaw: such language (and mathematics) is used part and parcel in innumerable forms of human life and their language "games," where the link between physical objects, mental objects, and so-called objective knowledge is puzzling, as well as a never-ending source of philosophical delusion.

To sum up: Google thinks its algorithm is serving up discoveries of objective realities, when it is really extending the form of life called "algorithm" –no "mere" here, but in fact an ideological extension of language that conceals its power relations and manufactures the assumed sense that such discovery is "natural."  It is, au contraire, a highly developed, very human form of life parallel to, and participating in, innumerable other forms of life, and just as subject to their foibles, delusions, illogic, and mistakes as any other linguistic form of life. There is no "merely" (so-called "nothing-buttery") about Google's ideological extension: it is very powerful and seems, at the moment, to rule the world.  Like every delusion, however, it could fall "suddenly and inexplicably," like an algorithmic Berlin Wall, and "no one could have seen it coming" –because of the magnificent illusion of ideology (as with the Berlin Wall: ideology on both sides, upheld by both the CIA and the KGB).

This is once again to rehearse the crucial difference between Popper's and Wittgenstein's understandings of science and knowledge.  A highly relevant text is the lucid, short Wittgenstein's Poker: The Story of a Ten-Minute Argument Between Two Great Philosophers (by David Edmonds and John Eidinow, HarperCollins, 2001; Worldcat).  Wittgenstein: if we can understand the way language works from within language (our only vantage point), most philosophical problems will disappear, and we are left with puzzles and mis-understandings that arise when we use the logic of our language improperly.  Popper: serious philosophical problems exist with real-world consequences, and a focus upon language only "cleans its spectacles" to enable the wearer to see the world more clearly.  (The metaphor is approximately Popper's; this quick summary will undoubtedly displease informed philosophers, and I beg their forgiveness, for the sake of brevity.)

For Wittgenstein, if I may boldly speculate, Google would only render a reflection of ourselves, our puzzles, mis-understandings, and mistakes. Example: search "white girls," then clear the browser of its cookies (this is important), and search "black girls."  Behold the racial bias. The difference in Google's search results points to machine-reproduced racism that would not have surprised Wittgenstein, but seems foreign to Popper's three worlds.  Google aspires to Popper's claims of objectivity, but behaves very differently –at least, its algorithm does.  No wonder its algorithm has taken on the aura of an ancient deity: it serves weal and woe without concern for the fortunes of dependent mortals. Except . . . it's a human construct.

So, Swanson's article identifies and makes plausible "undiscovered public knowledge" because of the logical and essential incompleteness of discovery (what he called "search"): discovery signals a wide variety of human forms of life, and no algorithm can really anticipate them.  The Antikythera mechanism, far from being an odd example, is a pregnant metaphor for the poignant frailties of human knowledge and humans' drive to push past their limits. Like the Archimedes palimpsest, "undiscovered public knowledge" is one of the elements that makes human life human –without which we become, like the Q Continuum in Star Trek: The Next Generation, merely idle god-like creatures of whim and no moral gravitas whatsoever.  The frailty of knowledge –that it is made up of innumerable forms of human life, which have to be lived by humans rather than algorithms– gives the human drive to know its edge, and its tragedy.  A tragic sense of life, however, is antithetical to the techno-solutionist ideology of the algorithm.

(Continued in the second post, Undiscovered Summer Reading)

11 Responses for my 1 Colleague at Dartmouth College

More than a month ago, Joshua Kim asked eleven questions of his colleagues at ACRL 2017 (Association of College and Research Libraries, Baltimore, March 22-25).  I simply cannot keep up the pace of writing and work that apparently characterizes Dr. Kim (I ask, as does Barbara Fister: "how does he do it?").  I don't have answers — but after an embarrassingly long delay, here are my responses to his eleven questions:

Question 1: What is keeping academic librarians up at night?

Allergy to book mold?  Seriously: the mis-match between 1) the transformations of libraries and librarians now in process, 2) unrealistic expectations, both within and outside academe, about the transformation of higher education by digital technology, and 3) available financial resources.  There is barely enough money to fund our members' needs day-to-day, much less the future needs of members who will bring very high, very different expectations to their education and research.  Training and re-training librarians, even younger, newer ones, will also be not only a significant expense, but an unavoidable one if they are to remain relevant and aligned with their institutional mission.

Question 2: What will the academic library look like in 2025?

Libraries will still be living in a both-and world: some members will continue to want printed books, but will use them differently than in the recent past (2015).  Some printed books will explicitly engage digital resources as supplements and complements.  Some members will never want to see or open a printed book.  Some members will be working far more with data sets and non-textual (or ostensibly non-textual) information visualizations.  Libraries will have fewer printed books on site, more workspaces (and more different kinds), less reader or member privacy, and more commercialization by monetized information organizations.  Members will still want a physical library space to be a place to get and stay "on task."  Some members will never interact physically with a library, or personally with librarians, but will use library services every day through the information configurations that librarians will tend and troubleshoot.

Question 3: How is the academic librarian profession changing?

Already there is far more emphasis upon communication, instructional, and design skills than ten years ago (even than five years ago).   Technical, back-office skills are rapidly changing from the provision and editing of information to aligning interactive and interoperable information systems.  Library leadership is especially challenged to visualize what could be, might be, or will be, to our stakeholders, and figure out how to achieve all that, and yet negotiate present-day campus political, financial, and legal arrangements that often reflect patterns and processes that are already obsolete.

Question 4: What is the role of the academic library in leading institutional transformation?

Preface: If universities are like the proverbial elephant as regarded by the visually impaired, libraries are very well-positioned (almost uniquely) to see a great deal of the elephant. The daily life of a library interacts with faculty who are teaching and doing research right now, academic leadership, policy and planning, campus operations of all kinds, public safety and security, university financial offices, alumni/ae relationships, enrollment retention, student recruitment, faculty recruitment, information technology, instructional design, construction and facilities management, and sometimes even the food service.  As a library director, I have almost all of those people on speed-dial.

To answer this question: libraries are almost uniquely well-positioned to act as change agents by partnering with a wide variety of interests to achieve a cumulative social, educational, intellectual impact on campus beyond the abilities or purview of any one campus organization.  This requires vision, street smarts, and an ability to listen.  It requires seeing that library priorities are not always the institution's priorities, but when different they need to come into some kind of symbiosis.

Question 5: How do academic librarians think about learning innovation?

It's a broad term.  Librarians would love to collaborate with instructional designers, faculty members, and others to create pathways for learning that transcend previous classroom, lab, and practice settings.   Organizations, consultants, and academic specialists will all be part of that –but many academic librarians I know are increasingly suspicious of the corporate interests in the phrase "learning innovation."  The innovations to learning in higher education that will be most productive are those that will not be packaged and sold by corporate interests, but will be far more local, ad-hoc, and malleable by teachers and learners themselves.  When I view a video purporting to be about Learning Innovation, and the talking head of Thomas Friedman appears, I begin to wonder whose interests are really going to be served.

Question 6:  What is the role of the academic library in leading institutional efforts [that will] drive progress in the iron triangle of costs, access, and quality?

Academic librarians have a lot of experience with the trade-offs of costs, access limitations, and quality (both of information per se, and of presentation and interface).  Part of my daily life is putting budget numbers and academic ways-and-means together.  I believe the academic library's role can be incubator, initiator, and assessor of costs for access and quality of outcomes but that role is not guaranteed.  I believe that academic librarians will also want to challenge the oft-encountered (perhaps dominant) idea that instructional quality is simply a cost that limits institutional net income.  The recent ACE paper Instructional Quality, Student Outcomes, and Institutional Finances (.pdf) points at research that needs to be done, and assumptions that should be interrogated.

Question 7: What does the academic library leadership pipeline look like?

I heard some real concerns at ACRL about how the field will mentor future leaders who will need the financial, political, academic, and social skills necessary to lead a complicated organization on a complicated campus (physical or digital).  The relative slow-down in professional movement, promotion, and retirements in the years after 2009, coupled with either outright downsizing or less (immediately) drastic holds on hiring, have produced a situation where there is not a sufficient number of opportunities for rising leaders to learn their craft.  The profession is greying, and I cannot blame recent college graduates who bypass library and information science programs in favor of fields in which they will be able to pay off their substantial student debts more readily.  Yet we really need those people, and we need creative, competent new professionals of every age who will contribute their perspectives and learn how business actually gets done in many institutions.

Question 8:  How is the academic library addressing challenges around diversity and inclusion?

This was also a major theme of the ACRL conference, and built up to the simply fabulous closing keynote by Carla Hayden, Librarian of Congress.  These challenges play differently in contexts: large academic library systems can pursue strategies and mentorships that may not be practical for much smaller libraries –where challenges and real needs for diversity of perspectives, persons, and inclusions of all kinds of persons are still very much present and felt.  I think this is a real opportunity for ACRL: to lead a multi-sided approach with library and information schools, foundations and grantors, large and small academic libraries, national, state, and regional library associations (in particular with library technology, and leadership & management divisions), and academic administrative organizations (AAC&U, ACAD, and HERC), for a cumulative impact on the profession, the libraries, and the universities.  I think that the professional leadership education offered by the Harvard Graduate School of Education is also a vital and viable venue to put together the efforts of many organizations.

Question 9:  What are the big arguments and debates within the academic library discipline?

I find so many that I will inevitably leave some out of even a very long list.  But here's my short list:

  • Privacy, user security, and trust: maintaining the academic library digital and physical space as a non-commercial zone of exception, as much as possible; the way that users' searches and downloads can become monetized data points for commercial services that offer a false equivalent to a real library –and whether librarians can really do anything about that, or respond to it usefully;
  • Evolving understandings or interpretations of the Information Literacy Framework: what it brings in, leaves out, interrogates, and strengthens, and the sometimes yawning gap between the aspirations of the Framework and the sometimes frightening realities of many young students' lack of curiosity and joy;
  • The persistent tensions between "resilience" as a good term for a kind of creative flexibility in the face of adversity, and "resilience" as the substitution of a personal response for a solution to a structural problem.  I heard one speaker use the word and then immediately apologize for it, after a standing-room-only presentation called Resilience, Grit and Other Lies: Academic Libraries and the Myth of Resiliency (.pdf).

Other attendees are welcome to point out all the good fights I missed.

Question 10:  How is the relationship between academic libraries and centers for teaching and learning (CTLs) evolving?

I'm not sure there is a consensus, since there are so many variables in academic contexts. I know of one case where a beautifully renovated CTL in fact combined about 10 other services formerly located elsewhere in a large university.  That set of new partnerships has required so much team-building and re-negotiation that librarians in the same building have not had as much contact as they previously anticipated (this may be changing recently).  Where the CTL and librarian partners have sufficient contact and are not completely frustrated by funding limitations (or non-existence), I have heard of enormously fruitful partnerships evolving.  Open Educational Resources, Open Textbooks, and so many other hot topics really call for multi-sided collaborations.  My favorite anecdote is of an information technologist with a strong secondary background in instructional design who exclaimed, "Wow, there's a lot of information technology in the library!"  I believe that many libraries, especially on undergraduate-oriented campuses, were attempting to be centers for teaching and learning before the phrase was invented, and in those places where turf is not a source of conflict, creative partnerships are forming.

The recent Ithaka S+R survey of library directors found that "while [many] library directors agreed that librarians at their institutions contribute significantly to student learning in a variety of ways, only about half of the faculty members for the Ithaka S+R Faculty Survey 2015 recognized these contributions." (pages 3-4) I suspect that both Centers for Teaching and Learning and academic libraries face a common challenge to communicate what they can (and do) contribute to faculty who are genuinely skeptical or worried about maintaining their turf.

Question 11:  What questions should I be asking about the changing academic library?

The fact that you are asking any questions at all is remarkable to many academic librarians, who have so often felt marginalized (for reasons good and bad) by campus technology and technologists in the past couple of decades.  You don't take anything for granted.

I can only really respond by suggesting the questions that I'm asking as I lead my organization through a process of listening, thinking, and planning together: what is our core mission in plain language?  What is our value proposition for our institution?  How do we show that we are doing that?  How do our mission and value proposition align with our institution's proclaimed commitments and priorities?

In this process I have spoken with many people on my campus, and (following advice from a mentor) I asked each of them simply:  "What is your job?  What difference does your office make here?  What's your biggest challenge?"  Their responses were amazing, and almost all of them pointed in one direction: "how do we communicate to a skeptical world what an amazing difference real learning can make in a student's life?"  To the extent that our library can respond to that question with grace and authenticity, we can also state our value proposition and our mission, and our alignment with our university.

Even with all the challenges, controversies, and constraints, this is the best time ever to be an academic librarian.