
(Image source: applift.com)

Two recent articles or reports, published entirely separately but oddly complementary, give shape to the ominous information landscape today, so hostile to expertise and alien to nuance. The first, published in Nature, is "Information Gerrymandering and Undemocratic Decisions," by Alexander J. Stewart et al.; the other (.pdf) is Source Hacking: Media Manipulation in Practice, by Joan Donovan and Brian Friedberg, published by the digital think tank Data & Society, founded by danah boyd (lower case). Donovan and Friedberg have roles in the Technology and Social Change Research Project at the Shorenstein Center of the Harvard Kennedy School.

"Information Gerrymandering" reports results of an experiment in which people were recruited to participate in a voting game, involving 2,500 participants and 120 iterations. The game divided participants into two platforms, purple or yellow, and the goal was to win the most votes (first past the post). Would-be winners had to convince others to join their party; in the event of a deadlock, both parties lose. The authors writes, "a party is most effective when it influences the largest possible number of people just enough to flip their votes, without wasting influence on those who are already convinced." When willingness to compromise is unevenly distributed, those who have a lot of zealots, who in principle oppose any compromise, have an advantage. When both sides use such a zealous strategy, however, deadlock results and both sides lose.

To seed the game the authors added influencers, whom they dubbed "zealous bots," to argue against compromise and persuade others to agree with them. They ran the test with European and American participants (asking whether purple or yellow fared better), and then ran similar analyses of legislative bodies in the UK and the USA. They write,

[O]ur study on the voter game highlights how sensitive collective decisions are to information gerrymandering on an influence network, how easily gerrymandering can arise in realistic networks and how widespread it is in real-world networks of political discourse and legislative process. Our analysis provides a new perspective and a quantitative measure to study public discourse and collective decisions across diverse contexts. . . .

Symmetric influence assortment allows for democratic outcomes, in which the expected vote share of a party is equal to its representation among voters; and low influence assortment allows decisions to be reached with broad consensus despite different partisan goals. A party that increases its own influence assortment relative to that of the other party by coordination, strategic use of bots or encouraging a zero-sum worldview benefits from information gerrymandering and wins a disproportionate share of the vote—that is, an undemocratic outcome. However, other parties are then incentivized to increase their own influence assortment, which leaves everyone trapped in deadlock.

Information Gerrymandering and Undemocratic Decisions, p. 120

This is oddly synchronous with current events (August-September 2019), which seem turbo-charged to attract attention and conflict, and to deflect persuasion and obfuscate any nuance. Zealotry is a strategy to maximize attention and conflict, and to discourage the nuance that makes compromise and persuasion possible. Those who shout the loudest get the most attention. Zealous bots, indeed!

That's where the second article comes in, Source Hacking. Zealots can now use online manipulation in very specific ways with extremely fine-grained methods on very narrow slices of online attention or "eyes." Donovan and Friedberg call this "source hacking," a set of techniques for hiding the sources of misleading or false information, in order to circulate it widely in "mainstream" media. These techniques or tactics are:

  • Viral sloganeering, repackaging extremist talking points for social media and broadcast media amplification;
  • Leak forgery, creating a spectacle by sharing false or counterfeit documents;
  • Evidence collages, consisting of misinformation from multiple sources that is easily shareable, often as images (hence collages);
  • Keyword squatting, strategic domination of keywords via manipulation and "sock-puppet" false-identity accounts, in order to misrepresent the behavior of disfavored groups or opponents.

The authors ask journalists and media figures to understand how viral slogans work ("jobs not mobs" was a test case), and to recognize their own role in inadvertently assisting covertly planned campaigns by extremists to popularize a slogan already circulating in highly polarized online communities, such as Reddit groups or 4chan boards. "Zealous bots" indeed!

Taken together, these two articles vividly delineate how zealots can take over information exchanges and trim the "boundaries" of discourse (gerrymander them) to depress any and all persuasion, nuance, or complexity. These zealots do so by using very precise tactics of viral sloganeering, leaking forged documents, creating collages of false or highly misleading evidence pasted together from bits of truth, and dominating certain keywords (squatting) so as to manipulate algorithms and engage in distortion, blaming, and threats. Such communication eventually reaches a "tipping point" (a phrase used by Claire Wardle of First Draft News in 2017) at which misinformation and misrepresentation overwhelm any accurate representation, nuanced discussion, persuasion, or meaningful exchange.

Those who wanted to "move fast and break things" have certainly succeeded, and it remains to be seen whether anything can remain whole in their wake, outside of communities of gift (scholarly) exchange explicitly dedicated to truth and discernment. Libraries have to house, encourage, foment, and articulate those values and communities --hardly a value-free librarianship, and one that does risk sometimes tolerating unjust power relationships because their alternatives are even worse.

The ultimate question for a responsible man to ask is not how he is to extricate himself heroically from the affair, but how the coming generation is to live! It is only from this question, with the responsibility towards history, that fruitful solutions can come, even if for the time being they are very humiliating.

Dietrich Bonhoeffer, "After Ten Years," 1943, translated and published in Letters and Papers from Prison

(And no, that is not a nod to a certain court evangelical who pretends to understand Bonhoeffer, but who can't speak a word of German, and is simply a shoddy scholar.)

Barbara Fister helpfully pointed out why librarians should not be intimidated by Kanopy's tactics with library users.
Intimidation sculpture by Michel Rathwell from Cornwall, Canada
[CC BY 2.0 (https://creativecommons.org/licenses/by/2.0)]

Barbara Fister's post on Inside Higher Ed, "Unkind Rewind" (June 26), is totally correct about Kanopy Streaming Video's creepy tactics: contacting users directly when a library cancels its Kanopy contract. This is an outrageous abuse of user data, and it has the long-term effect of completely undermining librarians' trust in the Kanopy organization.

Barbara points obliquely, via the Twitter feed @Libskrat, to Kanopy complaining to New York Public Library management when a librarian spoke out about its practices on a mailing list (AKA "listserv," but that term is trademarked in the USA). Kanopy referenced a supposed NDA and may have threatened legal action. If this violates any NDA my library inadvertently agreed to, then let Kanopy bring it on. I do not see any non-disclosure stipulated in the Kanopy TOS (terms of service), but other terms are fairly creepy: the requirement to submit to binding arbitration by the American Arbitration Association, which is as good as useless. I may seek to cancel my library's Kanopy account on that basis alone. I expect my University General Counsel may in fact require that I do so.

In addition, Kanopy's privacy policy allows them (at 2.(a)), in their view, to abuse librarians' trust outrageously:

We may ask for certain information such as your name, institution name, email address, password and other information. We may retain any messages you send through the Service, and we may also retain other information you voluntarily provide to us. We use this information to operate, maintain and provide to you the features and functionality of the Service, and as further described below.

Barbara Fister (back to her blog) makes one claim, though, with which I differ:

For librarians, my advice is to resist the shiny and trust we are relevant, to value the rights we traditionally have when we purchase content, and push for transparency and fairness in licensing deals.

I did not fail to "resist the shiny" when we began to work with Kanopy in 2014. My library entered into an agreement with Kanopy for good reasons.

We began to work with Kanopy, in the first place, because our Communications school (then a department) contracted with Kanopy without informing the library--and then expected to use the library's proxy service. (Actually a rather ignorant former staffer there wanted every student to create her or his own login, an obvious non-starter for Kanopy.) The Communications faculty wanted access to the Media Education Foundation's Media Studies and Communication collection. Only later did the library add the patron-driven, user-initiated acquisition (PDA) model, which proved unsustainably expensive in the past fiscal year. We did so because we needed to move the library into providing streaming video for curricular use, not because it is "shiny" (. . . and some of it is not!)

With Barbara's help (above), I realize that we happily dodged a bullet. Because we could not cut Kanopy off entirely (Communications still wants, and has, that MEF license), Kanopy has never contacted our users to deplore our decision to discontinue PDA. What we have done instead is more circumspect. We left Kanopy in our A-Z databases list, but publicly discouraged its use. We removed from our discovery service the records for any videos that are not licensed.

When an instructor wants to use a video in class (we have some of those), we attempt to re-direct the instructor to Academic Video Online (AVON), which we have leased from ProQuest at a more affordable (and controllable) price. If no suitable content is available, we will reluctantly authorize a PDA license for that one video --but we make sure the instructor knows how much it costs for 365 days. If a student wants access for a class or paper, we gently deny the request. (We can distinguish student requests from faculty requests because students have a slightly different e-mail domain address.) We make sure that department chairs, program directors, and deans know how much Kanopy costs. They completely support our plan to control expenses.

So my advice to librarians: don't discontinue Kanopy, simply bury them. Take them out of your A-Z databases list. Remove them from your catalog or discovery service. Act as though they don't exist. Make Kanopy your library's "frenemy." And refuse to knuckle under to anyone's mob-like tactics of intimidation.

Mouse Books give easy access to classic texts in a new format --especially essays or stories that often are not commercially viable on their own. The Mouse Books project wants to offer readers more ideas, insight, and connections for their lives.

The digital era was supposed to make books and lengthy reading obsolete: Larry Sanger (co-founder of Wikipedia, originator of citizendium.org and WatchKnowLearn.org) memorably critiqued its faulty assumptions in 2010 in Individual Knowledge in the Internet Age (here as .pdf; see also my posts here and here). "Boring old books" played a part. Clay Shirky of NYU wrote that "the literary world is now losing its normative hold" on our culture: "no one reads War and Peace. It's too long, and not so interesting. . . . This observation is no less sacrilegious for being true." Ah, the satisfying thunk of a smashed idol. Goodbye, long, boring, not so interesting books.

Except that a funny thing has happened on the way to the book burning. (Danke schoen, Herr Goebbels) Printed books have somehow held on: unit sales of print books were up 1.9% in 2016, at 687.2 million world-wide, the fourth straight year of print growth. Rumors of demise now seem premature. What gives?

The print book is far more subtly crafted than many digital soothsayers realize. Printed books have evolved continuously since Gutenberg: just take a look at scholarly monographs from 1930, 1950, 1970, 1990, and 2010. The current printed book, whether popular, trade, high-concept, or scholarly monograph, is a highly-designed and highly-evolved object.  Publishers are very alert to readers' desires and what seems to work best.  It was hubris to think that a lazily conceived and hastily devised digital book format could simply replace a printed book with an object equally useful: look at the evolution of the epub format (for example).

Designers will always refer to what has been designed previously, as well as to new and present needs and uses, when designing an object: consider the humble door. Poorly done e-books were a product of the "move fast and break things" culture, which doomed many ideas that appealed to thinking deeper than the one-sided imaginations of bro-grammer digital denizens.

Enter Mouse Books. Some months ago David Dewane was riding the bus in Chicago. "[I] happened to be reading a physical book that was a piece of classic literature. I wondered what all the other people on the bus were reading." He wondered, why don't those people read those authors on their smart phones? "I wondered if you made the book small enough—like a passport or a smart notebook—if you could carry it around with you anywhere."

David and close friends began to experiment, and eventually designed printed books the size and thickness of a mobile phone. They chose classic works available in the public domain, either complete essays (Thoreau's On the Duty of Civil Disobedience) or chapters (Chapters 4 and 5 of The Brothers Karamazov, "The Grand Inquisitor," in Constance Garnett's translation). These are simply, legibly printed in an 11-point Bookman Old Style font. Each book or booklet is staple bound ("double stitched") with a sturdy paper cover, 40-50 pages, 3 1/2 by 5 1/2 inches or just about 9 by 14 cm --a very high quality, small product.

David and the Mouse Team (Disney copyright forbids calling them Mouseketeers) aim for ordinary users of mobile phones. They want to provide a serious text that can be worn each day "on your body" in a pocket, purse, or bag, giving a choice between pulling out the phone and pulling out something more intellectually and emotionally stimulating. Mouse Books give easy access to classic texts in a new format --especially essays or stories that often are not commercially viable on their own (such as Melville's Bartleby the Scrivener, or Thoreau's essay, which are invariably packaged with other texts in a binding that will bring sufficient volume and profit to market). The Mouse Books project wants to offer readers more ideas, insight, and connections for their lives.

As a business, Mouse Books is still experimental, and has sought "early adopters": willing co-experimentalists and subjects. This means experimenting with the practice of reading, with classic texts of proven high quality, and complementing the texts with audio content, podcasts, and a social media presence. These supplements are also intended to be mobile --handy nearly anywhere you could wear ear buds.

As a start-up or experiment, Mouse Books has stumbled from time to time in making clear what a subscriber would get for funding the project on Kickstarter, what the subscription levels are, and the differences between US and outside-the-US subscriptions. The subscription levels on the Mouse Books drip (or d.rip) site do not match the subscription option offered directly on the Mouse Books Club web site. As a small "virtual company," this kind of confusion goes with the territory --part of what "early adopters" come to expect. That said, Mouse Books is also approaching sufficient scale that marketing clarity will be important for the project to prosper.

This is a charming start-up that deserves support, and it is highly consonant with the mission of librarians: to connect with others both living and dead, to build insight, to generate ideas. The printed book and those associated with it--bookstores, libraries, editors, writers, readers, thinkers--are stronger with innovative experiments such as Mouse Books. The printed book continues to evolve, and remains a surprisingly resilient, re-emergent legacy technology.

More about Mouse Books:

Web site: https://mousebookclub.com/collections/mouse-books-catalog

drip site (blog entries): https://d.rip/mouse-books?


Hartley argues that a liberal arts education widens a student's horizons, inquires into human behavior, and finds opportunities for products and services that will meet human needs. The "softer" subjects help persons determine which problem they're trying to solve in the first place.

The Fuzzy and the Techie: Why the Liberal Arts Will Rule the Digital World, by Scott Hartley. New York: Houghton Mifflin Harcourt, 2017. ISBN 978-0544-944770 $28.00 List.

Hartley writes that a "false dichotomy" divides computer sciences and the humanities, and extends this argue to STEM curricula as well. For example, Vinod Khosla of Sun Microsystems has claimed that "little of the material taught in liberal arts programs today is relevant to the future." Hartley believes that such a mind-set is wrong, for several reasons. Such a belief encourages students to pursue learning only in vocational terms: preparing for a job. STEM field require intense specialization, but some barrier to coding (for example) are dropping with web services or communities such as GitHub and Stack Overflow. Beyond narrow vocational boundaries, Hartley argues that liberal arts educations widen a student's horizon, inquire about human behavior and find opportunities for products and services that will meet human needs. The "softer" subjects helps persons to determine which problem they're trying to solve in the first place.

That said, the book does not move much further. Hartley never really tries to provide a working definition of a true "liberal arts" education except to distinguish it from STEM or computer science. By using the vocabulary of "fuzzy" and "techie" he encountered at Stanford, he inadvertently extends a mentality that has fostered start-ups notably acknowledged to be unfriendly to women. So far as I could determine, only a handful of Hartley's cited sources were published anywhere other than digitally--and the "liberal arts," however defined, have a very long tradition of inquiry and literature that Hartley passes by almost breezily and that is very little in evidence here. His book is essentially a series of stories of companies and their founders, many of whom did not earn "techie" degrees.

Mark Zuckerberg's famous motto "move fast and break things" utterly discounted the social and cultural values of what might get broken. Partly in consequence, the previously admired prodigies of Silicon Valley start-ups are facing intense social scrutiny in 2017, in part as a result of their ignorance of human fallibility and conflict.
Hartley is on to a real problem, but he needs to do much more homework to see how firmly the false dichotomy between the sciences and humanities is rooted in American (and worldwide) culture. The tendency, for example, to regard undergraduate majors as job preparation rather than as disciplined thinking, focused interest, and curiosity is so widespread that even Barack Obama displayed it. ("Folks can make a lot more, potentially, with skilled manufacturing or the trades than they might with an art history degree" --Barack Obama's remark in Wisconsin in 2014; he did retract it later.)

Genuine discussion of the values of humanities and STEM degrees can only take place with the disciplined thinking, awareness of traditions, and respect for diversity that are hallmarks of a true liberal arts education.

The recent acquisition of BePress & Digital Commons by Elsevier has occasioned a snowstorm of commentary and opinion.  Some of that has not been helpful, even though well-intended.

The recent acquisition of BePress & Digital Commons by Elsevier has occasioned a snowstorm of commentary and opinion.  Some of that has not been helpful, even though well-intended.  Sacred Heart University Library belongs to a 33-member group called the Affinity Libraries Group.  We are all private, Masters-1 universities (some with several doctoral degrees), relatively mid-sized, between the Oberlin Group of liberal arts college libraries and the Association of Research Libraries (ARL).

Much of the following is going to be discussed at a meeting alongside or outside the coming CNI meeting in December in Washington DC –but since CNI is expensive ($8,200/year), SHU is not a member, nor, I suspect, are other Affinity Libraries.  I am hoping that, using one technology or another, the Affinity Libraries can have a conversation as well.

The Affinity Group has changed over the years; we (or they, meaning our predecessor directors) used to meet often, sometimes in quite successful stand-alone events not connected with another event such as ALA Annual.  Others have said to me that in some ways the Affinity Group (as it was then) really came down to “professional and personal friends of Lew Miller” (former director at Butler), and while I’m not sure that’s fair, it is accurate in the sense that personal relationships formed a strong glue for the group. As directors retired or moved on, group adhesiveness accordingly changed. I’m avoiding the word or metaphor “decline” here because sometimes things just change, and the Affinity Group is one of them.  No one has been sitting around in the meantime.

We do share a strong commitment to the annual Affinity Group statistics. Perhaps now a discussion about institutional repositories and Digital Commons in particular could garner some interest with attention directed to issues for libraries of our size.

Some of the hoopla surrounding Elsevier’s acquisition of BePress has simply given occasion to express contributors’ intense dislike of Elsevier and its business model of maximizing profits above all else, certainly a justified objection given the state of all our budgets.

I think the anonymous Library Loon (Gavia Libraria) has pretty well summed up various points (though I don’t agree with every one of her statements), and Matt Ruen’s subsequent comment on August 9 is also helpful.  Paul Royster at University of Nebraska—Lincoln wrote on September 7 on the SPARC list:

The staff at BePress have been uniformly helpful and responsive, and there is no sign of that changing. They are the same people as before. They have never interfered with our content. I do not believe Elsevier paid $150 million in order to destroy BePress. What made it worth that figure was 1. the software, 2. the staff, and 3. the reputation and relationships. BePress became valuable by listening to their customers; Elsevier could learn a lot from them about managing relationships--and I hope they do.  BePress is also in a different division (Research) than the publications units that have treated libraries and authors so high-handedly. The stronger BePress remains, the better will be its position vis-a-vis E-corp going forward. Bashing BePress over its ownership and inciting its customers to jump ship strikes me as not in the best interests of the IRs or the faculty who use them.

Almost every college library has relationships with Elsevier already; deserting BePress is not a moral victory of right over wrong. The moral issue here is providing wider dissemination and free access for content created by faculty scholars. No one does that better than BePress, and until that changes, I see no cause for panic. Of course there are no guarantees, and it is always wise to have a Plan B and an exit strategy. But cutting off BePress to spite their new ownership does not really help those we are trying to serve.

I share Royster’s primary commitment to disseminate freely the content created by faculty scholars. Digital Commons has done that for SHU in spades, and has been a game-changer in this university and library, in my experience. I know that many share such a primary commitment; many also share an enduring and well-grounded suspicion of just about anything Elsevier might do.  As a firm, their behavior often has been downright divisive and sneaky (we can tell our stories…).  When I first read of the sale, my gut response was, “Really? Great, here’s a big problem when I don’t really want another.”   Digital Commons is one of the three major applications that power my library: 1) the integrated library services platform; 2) Springshare’s suite of research & reference applications, and 3) BePress.  Exiting BePress would be distracting, distressing, and downright burdensome.  As Royster writes, “there are no guarantees.”  Now we have to have a Plan B and an exit strategy, even if we never use it.

What I fear most is Gavia Libraria’s last option (in her blog post): that Elsevier will simply let “BePress languish undeveloped, with an eye to eventually shrugging and pulling the plug on it.”  I have seen similar “application decay” with ebrary, RefWorks, and (actually) SerialsSolutions, several of which have languished (or are languishing) for years before any genuine further development.  I watched their talented creators and originating staff members drift away into other ventures (e.g., ThirdIron).  Were that to happen, it would be bad news for SHU and other Affinity members.  Royster’s statement “they are the same people as before” has not always held true in the past when smaller firms became subject to hiring processes mandated by larger organizations (e.g., SerialsSolutions’ staff members now employed by ProQuest).

On SPARC’s list, there has been great discussion about cooperation & building a truly useful non-profit, open-source application suite for institutional repositories, digital publishing, authors’ pages (like SelectedWorks), etc.  Everyone knows that’s a long way off, without any disrespect to Islandora, Janeway, DSpace, or any other application.  Digital Commons and SelectedWorks are pretty well the state of the art, and their design and consequent workflow decisions have benefited the small staff of the SHU Library enormously (even with the occasional hiccups and anomalies). The Digital Commons Network has placed SHU in the same orbit or gateway as far larger and frankly more prestigious colleges and universities, and I could not be happier with that.  I have my own SelectedWorks page and I like it.  I would be sorry to see all this go –unless a truly practical alternative emerges.  Who knows when that will be?

In the meantime, we will be giving attention to Plan B –until now we have not had one or felt we needed one (probably an unfortunate oversight, but it just did not become a priority).  I really don’t yet know what our Plan B will be.

I sense that if OCLC were to develop a truly useful alternative to Digital Commons (one well beyond DSpace as it presently exists), it might have some traction in the market (despite all of our horror stories about OCLC, granted).  Open Science Framework, Islandora, or others hold promise but probably cannot yet compete feature-by-feature with Digital Commons (at least, I have not seen anything that comes even close).  If you think I’m wrong, please say so! –I will gladly accept your correction.

Do you know Yewno, and if Yewno, exactly what do you know? That "exactly what" will likely contain machine-generated replications of problematic human biases.

This is the third of the "undiscovered summer reading" posts; see also the first and second.

At the recent Association of College and Research Libraries conference in Baltimore I came across Yewno, a search-engine-like discovery or exploration layer that I had heard about.  I suspect that Yewno or something like it could be the "next big thing" in library and research services.  I have served as a librarian long enough both to be very interested and to be wary at the same time --so many promises have been made by the commercial information technology sector, and the reality has fallen far short --remember the hype about discovery services?

Yewno is a so-called search app; it "resembles a search engine --you use it to search for information, after all--but its structure is network-like rather than list-based, the way Google's is. The idea is to return search results that illustrate relationships between relevant sources" --mapping them out graphically (like a mind map). Those words are quoted from Adrienne LaFrance's Atlantic article on the growing understanding of the Antikythera mechanism as an example of computer-assisted associative thinking (see, all these readings really come together).  LaFrance traces the historical connections between "undiscovered public knowledge," Vannevar Bush's Memex (machine) in the epochal As We May Think, and Yewno.  The hope is that through use of an application such as Yewno, associations could be traced between ancient time-keeping, Babylonian and Arabic mathematics, medieval calendars, astronomy, astrological studies, ancient languages, and other realms of knowledge. At any rate, that's the big idea, and it's a good one.

So who is Yewno meant for, and what's it based on?

LaFrance notes that Yewno "was built primarily for academic researchers," but I'm not sure that's true, strictly. When I visited the Yewno booth at ACRL, I thought several things at once: 1) this could be very cool; 2) this could actually be useful; 3) this is going to be expensive (though I have neither requested nor received a quote); and 4) someone will buy them, probably Google or another technology octopus. (Subsequent thought: where's Google's version of this?)  I also thought that intelligence services and corporate intelligence advisory firms would be very, very interested --and indeed they are.  Several weeks later I read Alice Meadows' post, "Do You Know About Yewno?" on the Scholarly Kitchen blog, and her comments put Yewno in clearer context. (Had I access to Yewno, I would have searched, "yewno.")

Yewno is a start-up venture by Ruggero Gramatica (if you're unclear, that's a person), a research strategist with a background in applied mathematics (Ph.D., King's College London) and an M.B.A. (University of Chicago). He is the first-listed author of "Graph Theory Enables Drug Repurposing," a paper (DOI) in PLOS ONE that introduces:

a methodology to efficiently exploit natural-language expressed biomedical knowledge for repurposing existing drugs towards diseases for which they were not initially intended. Leveraging on developments in Computational Linguistics and Graph Theory, a methodology is defined to build a graph representation of knowledge, which is automatically analysed to discover hidden relations between any drug and any disease: these relations are specific paths among the biomedical entities of the graph, representing possible Modes of Action for any given pharmacological compound. We propose a measure for the likeliness of these paths based on a stochastic process on the graph.
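The core move in that abstract --recovering hidden relations as paths through a graph of extracted entities-- is easy to sketch. The graph below is entirely invented, and a plain breadth-first search stands in for the paper's weighted, stochastic path analysis; it shows only the shape of the approach, not Gramatica's actual method.

```python
# Invented mini knowledge graph; a path from a drug to a disease suggests a
# candidate "mode of action" worth examining (illustration only).
from collections import deque

graph = {
    "aspirin": ["COX-1", "COX-2"],
    "COX-2": ["inflammation", "prostaglandin E2"],
    "prostaglandin E2": ["fever"],
    "COX-1": ["platelet aggregation"],
    "platelet aggregation": ["thrombosis"],
}

def find_path(start, goal):
    """Breadth-first search for one path linking two entities."""
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

print(find_path("aspirin", "thrombosis"))
# ['aspirin', 'COX-1', 'platelet aggregation', 'thrombosis']
```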

Yewno does the same thing in other contexts:

an inference and discovery engine that has applications in a variety of fields such as financial, economics, biotech, legal, education and general knowledge search. Yewno offers an analytics capability that delivers better information and faster by ingesting a broad set of public and private data sources and, using its unique framework, finds inferences and connections. Yewno leverages on leading edge computational semantics, graph theoretical models as well as quantitative analytics to hunt for emerging signals across domains of unstructured data sources. (source: Ruggero Gramatica's LinkedIn profile)

This leads to several versions of Yewno: Yewno Discover, Yewno Finance, Yewno Life Sciences, and Yewno Unearth.  Ruth Pickering, the company's co-founder and Chief Business Development & Strategy Officer, comments, "each vertical uses a specific set of ad-hoc machine learning based algorithms and content. The Yewno Unearth product sits across all verticals and can be applied to any content set in any domain of information."  Don't bother calling the NSA --they already know all about it (and probably use it, as well).

Yewno Unearth is relevant to multiple functions of publishing: portfolio categorization, the ability to spot gaps in content, audience selection, editorial oversight and description, and other purposes for improving a publisher's position, both intellectually and in the information marketplace. So Yewno Discover is helpful for academics and researchers, but the whole of Yewno is also designed to relay more information about them to their editors, publishers, funders, and those who will in turn market publications to their libraries.  Elsevier, Ebsco, and ProQuest will undoubtedly appear soon in librarians' offices with Yewno-derived information, and that encounter could likely prove to be truly intimidating.  So Yewno might be a very good thing for a library, but not simply an unalloyed very good thing.

So what is Yewno really based on? The going gets more interesting.

Meadows notes that Yewno's underlying theory emerged from the field of complex systems at the foundational level of econophysics, an inquiry "aimed at describing economic and financial cycles utilizing mathematical structures derived from physics." The mathematical framework, involving uncertainty, stochastic (random probability distribution) processes, and nonlinear dynamics, came to be applied to biology and drug discovery (hello, Big Pharma). This kind of information processing is described in detail in a review article, "Deep Learning," in Nature (Vol. 521, 28 May 2015, doi:10.1038/nature14539).  An outgrowth of machine learning, deep learning "allows computational models that are composed of multiple processing layers to learn representations of data with multiple levels of abstraction."  Such deep learning "discovers intricate structure in large data sets by using the backpropagation algorithm to indicate how a machine should change its internal parameters that are used to compute the representation in each layer from the representation in the previous layer." Such "deep convolutional nets" have brought about significant breakthroughs in processing images, video, and speech, while "recurrent nets" have brought new learning powers to "sequential data such as text and speech."

The article goes on in great detail, and I do not pretend I understand very much of it.  Its discussion of recurrent neural networks (RNNs), however, is highly pertinent to libraries and discovery.  The backpropagation algorithm is basically a process that adjusts the weights used in machine analysis while that analysis is taking place.  For example, RNNs "have been found to be very good at predicting the next character in the text, or next word in a sequence," and by such backpropagation adjustments, machine language translations have achieved greater levels of accuracy. (But why not complete accuracy? --read on.)  The process "is more compatible with the view that everyday reasoning involves many simultaneous analogies that each contribute plausibility to a conclusion." In their review's conclusion, the authors expect "systems that use RNNs to understand sentences or whole documents will become much better when they learn strategies for selectively attending to one part at a time."
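To make the weight-adjustment idea concrete, here is a deliberately tiny sketch of next-character prediction: a single weight matrix trained by gradient descent on a toy string. It is my own illustration of the mechanism the review describes, not an actual recurrent network and certainly not Yewno's system; the string, learning rate, and step count are arbitrary.

```python
# Minimal next-character predictor: adjust weights to reduce prediction error,
# the same "nudge the parameters downhill" idea behind backpropagation.
import numpy as np

text = "the theory of the thing"
chars = sorted(set(text))
idx = {c: i for i, c in enumerate(chars)}
V = len(chars)

rng = np.random.default_rng(0)
W = rng.normal(scale=0.01, size=(V, V))   # score of each next char, given current char

pairs = [(idx[a], idx[b]) for a, b in zip(text, text[1:])]

for step in range(200):
    grad = np.zeros_like(W)
    for cur, nxt in pairs:
        p = np.exp(W[cur] - W[cur].max())
        p /= p.sum()                      # softmax over possible next characters
        p[nxt] -= 1.0                     # gradient of cross-entropy loss w.r.t. scores
        grad[cur] += p
    W -= 0.5 * grad / len(pairs)          # adjust the weights while "analysis" proceeds

p = np.exp(W[idx["t"]] - W[idx["t"]].max())
p /= p.sum()
print("after 't', most likely next character:", chars[int(p.argmax())])
```

On this toy text the model learns, unsurprisingly, that "t" is usually followed by "h"; scaled up by many layers and billions of words, the same adjustment loop is what produces the translations and predictions discussed above.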

After all this, what do you know? Yewno presents the results of deep learning through recurrent neural networks that identify nonlinear concepts in a text, a kind of "knowledge." Hence Ruth Pickering can plausibly state:

Yewno's mission is "Knowledge Singularity" and by that we mean the day when knowledge, not information, is at everyone's fingertips. In the search and discovery space the problems that people face today are the overwhelming volume of information and the fact that sources are fragmented and dispersed. There's a great T.S. Eliot quote, "Where's the knowledge we lost in information" and that sums up the problem perfectly. (source: Meadows' post)

Ms. Pickering perhaps revealed more than she intended.  Her quotation from T.S. Eliot is found in a much larger and quite different context:

Endless invention, endless experiment,
Brings knowledge of motion, but not of stillness;
Knowledge of speech, but not of silence;
Knowledge of words, and ignorance of the Word.
All our knowledge brings us nearer to our ignorance,
All our ignorance brings us nearer to death,
But nearness to death no nearer to GOD.
Where is the Life we have lost in living?
Where is the wisdom we have lost in knowledge?
Where is the knowledge we have lost in information?
The cycles of Heaven in twenty centuries
Bring us farther from GOD and nearer to the Dust. (Choruses from The Rock)

Eliot's interest is in the Life we have lost in living, and his religious and literary use of the word "knowledge" signals the puzzle at the very base of econophysics, machine learning, deep learning, and backpropagation algorithms.  Deep learning performed by machines mimics what humans do, their forms of life.  Pickering's "Knowledge Singularity" alludes to the semi-theological vision of Ray Kurzweil's millennialist "Singularity": a machine intelligence infinitely more powerful than all human intelligence combined.  In other words, where Eliot is ultimately concerned with Wisdom, the Knowledge Singularity is ultimately concerned with Power.  Power in the end means power over other people: otherwise it has no social meaning apart from simply more computing.  Wisdom interrogates power, and questions its ideological supremacy.

For example, three researchers at the Center for Information Technology Policy at Princeton University have shown that "applying machine learning to ordinary human language results in human-like semantic biases" ("Semantics derived automatically from language corpora contain human-like biases," Science, 14 April 2017, Vol. 356, issue 6334: 183-186, doi 10.1126/science.aal4230). The results of their replication of a spectrum of known biases (measured by the Implicit Association Test) "indicate that text corpora contain recoverable and accurate imprints of our historic biases, whether morally neutral as toward insects or flowers, problematic as toward race or gender, or even simply veridical, reflecting the status quo distribution of gender with respect to careers or first names." Their approach holds "promise for identifying and addressing sources of bias in culture, including technology."  The authors laconically conclude, "caution must be used in incorporating modules constructed via unsupervised machine learning into decision-making systems."  Power resides in such decisions about other people, resources, and time.

Arvind Narayanan, who published the paper with Aylin Caliskan and Joanna J. Bryson, noted that "we have a situation where these artificial-intelligence systems may be perpetuating historical patterns of bias that we might find socially unacceptable and which we might be trying to move away from."  The Princeton researchers developed an experiment with GloVe, a program that replicates the Implicit Association Test in a machine-learning representation of co-occurring words and phrases.  Researchers at Stanford turned this loose on roughly 840 billion words from the Web, and the team looked for co-occurrences and associations of words such as "man, male" or "woman, female" with "programmer, engineer, scientist" or "nurse, teacher, librarian."   They showed familiar biases in the distributions of associations, biases that can "end up having pernicious, sexist effects."
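The measurement behind findings like these is straightforward to sketch: compare how close a target word sits to each group of attribute words in the embedding space. The vectors below are made up for illustration; the published study uses real pretrained GloVe embeddings and a permutation test, both of which this toy omits.

```python
# WEAT-style association sketch with invented 2-D vectors standing in for
# real GloVe embeddings (illustration only).
import numpy as np

def cos(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

emb = {  # hypothetical embeddings
    "man":        np.array([0.9, 0.1]),
    "woman":      np.array([0.1, 0.9]),
    "programmer": np.array([0.8, 0.2]),
    "nurse":      np.array([0.2, 0.8]),
}

def association(word, group_a, group_b):
    # Positive: the word sits closer to group_a; negative: closer to group_b.
    return cos(emb[word], emb[group_a]) - cos(emb[word], emb[group_b])

for w in ("programmer", "nurse"):
    print(w, round(association(w, "man", "woman"), 3))
```

If the corpus behind the embeddings carries a stereotype, the association scores carry it too, which is all it takes for a downstream system to reproduce it.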

For example, machine-learning programs can translate foreign languages into sentences that reflect or reinforce gender stereotypes. Turkish uses a gender-neutral third-person pronoun, "o."  Plugged into the online translation service Google Translate, however, the Turkish sentences "o bir doktor" and "o bir hemşire" are translated into English as "he is a doctor" and "she is a nurse."  . . . . "The biases that we studied in the paper are easy to overlook when designers are creating systems," Narayanan said. (Source: Princeton University, "Biased Bots" by Adam Hadhazy.)

Yewno is exactly such a system insofar as it mimics human forms of life, which include, alas, the reinforcement of biases and prejudice.  So in the end, do you know Yewno, and if Yewno, exactly what do you know? That "exactly what" will likely contain machine-generated replications of problematic human biases.  Machine translations will never offer perfect, complete translations of languages because language is never complete --humans will always use it in new ways, with new shades of meaning and connotations of plausibility, because humans go on living in their innumerable, linguistic forms of life.  Machines have to map language within language (here I include mathematics as kinds of languages with distinctive games and forms of life).  No "Knowledge Singularity" can occur outside of language, because it will be made of language: but the ideology of the "Singularity" can conceal its origins in many forms of life, and thus appear "natural," "inevitable," and "unstoppable."

The "Knowledge Singularity" will calcify bias and injustice in an everlasting status quo unless humans, no matter how comparatively deficient, resolve that knowledge is not a philosophical problem to be solved (such as in Karl Popper's Worlds 1, 2, and 3), but a puzzle to be wrestled with and contested in many human forms of life and language (Wittgenstein). Only by addressing human forms of life can we ever address the greater silence and the Life that we have lost in living.  What we cannot speak about, we must pass over in silence (Wovon man nicht sprechen kann, darüber muss man schweigen, sentence 7 of the Tractatus) --and that silence, contra both the positivist Vienna Circle and Karl Popper (who was never part of it) is the most important part of human living.  In the Tractatus Wittengenstein dreamt, as it were, a conclusive solution to the puzzle of language --but such a solution can only be found in the silence beyond strict logical (or machine) forms: a silence of the religious quest beyond the ethical dilemma (Kierkegaard).

This journey through my "undiscovered summer reading," from the Antikythera mechanism to the alleged "Knowledge Singularity," has reinforced my daily, functional belief that knowing is truly something that humans do within language and through language, and that the quest which makes human life human is careful attention to the forms of human life, and the way that language, mathematics, and silence are woven into and through those forms. The techno-solutionism inherent in educational technology and library information technology --no matter how sophisticated-- cannot undo the basic puzzle of human life: how do we individually and socially find the world? (Find: in the sense of locating, of discovering, and of characterizing.)  Yewno will not lead to a Knowledge Singularity, but to derived bias and reproduced injustice, unless we acknowledge its limitations within language.

The promise of educational and information technology becomes more powerful when approached with modesty: there are no quick, technological solutions to puzzles of education, of finance, of information discovery, of "undiscovered public knowledge."  What those of us who are existentially involved with the much-maligned, greatly misunderstood, and routinely dismissed "liberal arts" can contribute is exactly what makes those technologies humane: a sense of modesty, proportion, generosity, and silence.  Even to remember those at this present moment is a profoundly counter-cultural act, a resistance to the techno-ideology of unconscious bias and entrenched injustice.

In educational technology, we are in the presence of a powerful ideology, and an ideology of the powerful: the neoliberal state and its allies in higher education.

(This is the second of three posts on my summer reading thus far; see parts one and three.)

Another article found in my strange cleaning mania is not so very old: George Veletsianos and Rolin Moe's The Rise of Educational Technology as a Sociocultural and Ideological Phenomenon. Published by (upper-case obligatory) EDUCAUSE, it argues that "the rise of educational technology is part of a larger shift in political thought" that favors (so-called) free-market principles over government oversight, and that it is also a response to the increasing costs of higher education.  Edtech proponents have (always? often?) "assumed positive impacts, promoting an optimistic rhetoric despite little empirical evidence of results --and ample documentation of failures."  In other words, we are in the presence of a powerful ideology, and an ideology of the powerful: the neoliberal state and its allies in higher education.

The authors frame their argument through assertions. First: the edtech phenomenon is a response to the increasing price of higher education, seen as a way to slow, stop, or reverse rising prices.  The popular press questions the viability of college degrees and higher education, sometimes with familiar "bubble" language borrowed from market analyses.  Second: the edtech phenomenon reflects a shift in political thought from government to free-market oversight of education: reducing governmental involvement and funding along with increasing emphasis on market forces "has provided a space and an opportunity for the edtech industry to flourish." Although set vastly to accelerate under Donald Trump and Betsy DeVos, funding reductions and a turn to "private sector" responses have long been in evidence, associated with the "perspective" (the authors eschew "ideology") of neoliberalism: the ideology that free-market competition invariably results in improved services at lower costs.  Outsourcing numerous campus services supposedly leads to lower costs, but also "will relegate power and control to non-institutional actors" (and that is what neoliberalism is all about).

The authors (thirdly) assert that "the edtech phenomenon is symptomatic of a view of education as a product to be packaged, automated, and delivered" --in other words, neoliberal service and production assumptions transferred to education.  This ideology is enabled by a "curious amnesia, forgetfulness, or even willful ignorance" (remember: we are in the presence of an ideology) "of past phases of technology development and implementation in schools."  When I was in elementary school (late 1950s and 1960s), the phase was filmstrips, movies, and "the new math," and it worked hand-in-glove with Robert McNamara's Ford Motor Company, and subsequently the Department of Defense, to "scale" productivity-oriented education for obedient workers and soldiers (the results of New Math were in my case disastrous, and I am hardly alone).  The educational objectivism implicit in much of edtech sits simultaneously and oddly with tributes to professed educational constructivism --"learning by doing"-- which tends then to be reserved for those who can afford it in the neoliberal state.  I have bristled when hearing the cliché that the new pedagogy aims for "the guide on the side, not the sage on the stage" --when my life and outlook have been changed by carefully crafted, deeply engaging lectures (but remember: we are in the presence of an ideology).

Finally, the authors assert that "the edtech phenomenon is symptomatic of the technocentric belief that technology is the most efficient solution to the problems of higher education."  There is an ideological solutionism afoot here. Despite a plethora of evidence to the contrary, techno-determinism (technology shapes its emerging society autonomously) and techno-solutionism (technology will solve societal problems) assume the power of the "naturally given," a sure sign of ideology.  Ignorance of edtech's history and impact "is illustrated by public comments arguing that the education system has remained unchanged for hundreds of years" (by edX CEO Anant Agarwal, among others), when the reality is, of course, one of academia's constant development and change.  Anyone who thinks otherwise should visit a really old institution such as Oxford University: older instances of architecture meant to serve medieval educational practices, retro-fitted to 19th- and early 20th-century uses, and now sometimes awkwardly retro-fitted yet again to the needs of a modern research university.  The rise and swift fall of MOOCs is another illustration of the remarkable ignorance that ideological techno-solutionism mandates in order to appear "smart" (or at least in line with Gartner's hype cycle).

The authors conclude, "unless greater collaborative efforts take place between edtech developers and the greater academic community, as well as more informed deep understandings of how learning and teaching actually occur, any efforts to make edtech education's silver bullet are doomed to fail."  They recommend that edtech developers and implementers commit to supporting their claims with empirical evidence "resulting from transparent and rigorous evaluation processes" (!--no "proprietary data" here); invite independent expertise; attend to discourse (at conferences and elsewhere) critical of edtech rather than merely promotional; and undertake reflection that is more than personal, situational, or reflective of one particular institutional location.  Edtech as a scholarly field and community of practice could in this way continue efforts to improve teaching and learning that will bear fruit for educators, not just for corporate technology collaborators.

How many points of their article are relevant by extension to library information technology, its implementation, and reflections on its use!  Commendably, ACRL and other professional venues have subjected library technologies to critical review and discourse (although LITA's Top Technology Trends Committee too often reverts to techno-solutionism and boosterism from the same old same old).  Veletsianos' and Moe's points regarding the neoliberal ideological suppositions of the library information technology market, however, are well-taken --just attend a conference presentation on the exhibition floor from numerous vendors for a full demonstration.  At the recent conference of the Association of College & Research Libraries, the critical language of the Framework for Information Literacy was sometimes turned on librarianship and library technology itself ("authority is constructed and contextual"), such as a critique of the term "resilient" (.pdf) and the growing usage of the term "wicked challenges" for those times we don't know what we don't know, or even know how to ask what that would be.

Nevertheless, it would be equally historically ignorant to deny the considerable contributions made by information technology to contemporary librarianship, even when such contributions should be regarded cautiously.  There are still interesting new technologies which can contribute a great deal even when they are neither disruptive nor revolutionary.  The most interesting (by far) new kind of technology or process I saw at ACRL is Yewno, and I will discuss that in my third blog post.

"Undiscovered public knowledge" seems an oxymoron. If "public" than why "undiscovered" --means the knowledge that once was known by someone, recorded, properly interred in some documentary vault, and left unexamined.

(This is the first of three posts about my semi-serendipitous summer reading; here are links to posts two and three.)

This last week I was seized by a strange mania: clean the office. I have been in my current desk and office since 2011 (when a major renovation disrupted it for some months).  It was time to clean --spurred by notice that boxes of papers would be picked up for the annual certified, assured shredding. I realized I had piles of FERPA-protected paperwork (exams, papers, 1-1 office hours memos, you name it).  Worse: my predecessor had left me large files that I hadn't looked at in seven years, and that contained legal papers, employee annual performance reviews, old resumes, consultant reports, accreditation documentation, etc. Time for it all to go!  I collected six large official boxes (each twice the size of a paper ream), but didn't stop there: I also cleaned the desk; cleaned up the desktop; recycled odd electronic items, batteries, and lightbulbs; and forwarded a very large number of vendor advertising pens to a cache for our library users ("do you have a pen?"). On Thursday I was left with the moment-after: I had cleared it all out: now what?

The "what" turned out to be various articles I had collected and printed for later reading, and then never actually read --some more recent, some a little older. (This doesn't count the articles I recycled as no longer relevant or particularly interesting; my office is not a bibliography in itself.) Unintentionally, several of these articles wove together concerns that have been growing in the back of my mind --and have been greatly pushed forward with the events of the past year (Orlando--Bernie Sanders--the CombOver--British M.P. Jo Cox--seem as distant and similar as events of the late Roman republic now, pace Mary Beard.)

"Undiscovered public knowledge" seems an oxymoron (but less one than "Attorney General Jeff Sessions").  If "public" than why "undiscovered"?  It means the knowledge that once was known by someone, recorded, properly interred in some documentary vault, and left unexamined and undiscovered by anyone else.  The expression is used in Adrienne LaFrance's Searching for Lost Knowledge in the Age of Intelligent Machines, published in The Atlantic, December 1, 2016.   Her leading example is the fascinating story of the Antikythera mechanism, some sort of ancient time-piece surfaced from an ancient, submerged wreck off Antikythera (a Greek island between the Peloponnese and Crete, known also as Aigila or Ogylos).  It sat in the crate outside the National Archaeological Museum in Athens for a year, and then was largely forgotten by all but a few dogged researchers, who pressed on for decades with the attempt to figure out exactly what it is.

The Antikythera mechanism has come to be understood only as widely separated knowledge has been combined by luck, persistence, intuition, and conjecture.  How did such an ancient time piece come about, who made it, based upon which thinking, from where?  It could not have been a one-off, but it seems to be a unique lucky find from the ancient world, unless other mechanisms or pieces are located elsewhere in undescribed or poorly described collections.  For example, a 10th-century Arabic manuscript suggests that such a mechanism may have influenced the development of modern clocks, and in turn built upon ancient Babylonian astronomical data.  (For more see Jo Marchant's Decoding the Heavens: a 2,000-year-old computer--and the century-long search to discover its secrets, Cambridge, Mass.: DaCapo Press, 2009: Worldcat; Sacred Heart University Library.) Is there "undiscovered public knowledge" that would include other mechanisms, other clues to its identity, construction, development, and influence?

"Undiscovered public knowledge" is a phrase made modestly famous by Don R. Swanson in an article by the same name in The Library Quarterly, 1986.  This interesting article is a great example of the way that library knowledge and practice tends to become isolated in the library silo, when it might have benefited many others located elsewhere. (It is also a testimony to the significant, short-sighted mistake made by the University of Chicago, Columbia University, and others, in closing their library science programs in the 1980s-1990s just when such knowledge was going public in Yahoo, Google, Amazon, GPS applications and countless other developments.)  Swanson's point is that "independently created fragments are logically related but never retrieved, brought together, and interpreted." The "essential incompleteness" of search (or now: discovery) makes "possible and plausible the existence of undiscovered public knowledge." (to quote the abstract --the article is highly relevant and well developed).  Where Swanson runs into trouble, however, is his use of Karl Popper's distinction between subjective and objective knowledge, the critical approach within science that distinguishes between "World 2" and "World 3."  (Popper's Three Worlds (.pdf), lectures at the University of Michigan in 1978, were a favorite of several of my professors at Columbia University School of Library Service; Swanson's article in turn was published and widely read while I was studying there.)

Popper's critical worlds (1: physical objects and events, including biological ones; 2: mental objects and events; 3: objective knowledge, a human but not Platonic zone) both enable the deep structures of information science as now practiced by our digital overlords and signal their fatal flaw.  They enable those deep structures and algorithms of "discovery" by assuming the link between physical objects and events, mental objects, and objective knowledge symbolically notated (in language and mathematics). Simultaneously Popper's linkage signals their fatal flaw: such language (and mathematics) is used part-and-parcel in innumerable forms of human life and their language "games," where the link between physical objects, mental objects, and so-called objective knowledge is puzzling, as well as a never-ending source of philosophical delusion.

To sum up:  Google thinks its algorithm is serving up discoveries of objective realities, when it is really extending the form of life called "algorithm" --no "mere" here, but in fact an ideological extension of language that conceals its power relations and manufactures the assumed sense that such discovery is "natural."  It is, au contraire, a highly developed, very human form of life, parallel to and participating in innumerable other forms of life, and just as subject to their foibles, delusions, illogic, and mistakes as any other linguistic form of life. There is no "merely" (so-called "nothing-buttery") to Google's ideological extension: it is very powerful and seems, at the moment, to rule the world.  Like every delusion, however, it could fall "suddenly and inexplicably," like an algorithmic Berlin Wall, and "no one could have seen it coming" --because of the magnificent illusion of ideology (as with the Berlin Wall, ideology on both sides, upheld by both the CIA and the KGB).

This is once again to rehearse the crucial difference between Popper's and Wittgenstein's understandings of science and knowledge.  A highly relevant text is the lucid, short Wittgenstein's Poker: The Story of a Ten-Minute Argument Between Two Great Philosophers (by David Edmonds and John Eidinow, Harper Collins, 2001; Worldcat).  Wittgenstein: if we can understand the way language works from within language (our only vantage point), most philosophical problems will disappear, and we are left with the puzzles and mis-understandings that arise when we misuse the logic of our language.  Popper: serious philosophical problems exist, with real-world consequences, and a focus upon language only "cleans its spectacles" to enable the wearer to see the world more clearly.  (The metaphor is approximately Popper's; this quick summary will undoubtedly displease informed philosophers, and I beg their forgiveness, for the sake of brevity.)

For Wittgenstein, if I may boldly speculate, Google would only render a reflection of ourselves, our puzzles, mis-understandings, and mistakes. Example: search "white girls," then clear the browser of its cookies (this is important), and search "black girls."  Behold the racial bias. The difference in Google's search results points to machine-reproduced racism that would not have surprised Wittgenstein, but seems foreign to Popper's three worlds.  Google aspires to Popper's claims of objectivity, but behaves very differently --at least, its algorithm does.  No wonder its algorithm has taken on the aura of an ancient deity: it serves weal and woe without concern for the fortunes of dependent mortals. Except . . . it's a human construct.

So, Swanson's article identifies and makes plausible "undiscovered public knowledge" because of the logical and essential incompleteness of discovery (what he called "search"): discovery signals a wide variety of human forms of life, and no algorithm can really anticipate them.  The Antikythera mechanism, far from an odd example, is a pregnant metaphor for the poignant frailties of human knowledge and humans' drive to push past their limits. Like the Archimedes palimpsest, "undiscovered public knowledge" is one of the elements that makes human life human --without which we become, like the Q Continuum in Star Trek: The Next Generation, merely idle god-like creatures of whim and no moral gravitas whatsoever.  The frailty of knowledge --that it is made up of innumerable forms of human life, which have to be lived by humans rather than algorithms-- gives the human drive to know its edge, and its tragedy.  A tragic sense of life, however, is antithetical to the tech-solutionist ideology of the algorithm.

(Continued in the second post, Undiscovered Summer Reading)

Back on January 5 (doesn't that seem like another age of Middle Earth by now?), Cal Newport wrote about what he termed "evaluation entanglement":

  • Evaluation entanglement. Keeping your productivity commitments all tangled in your head can cause problems when a strategy fails. Without more structure to the productivity portion of your life, it’s too easy for your brain to associate that single failure with a failure of your commitments as a whole, generating a systemic reduction in motivation.

Newport was writing in the context of New Year's "productivity tweaks," or what were once called resolutions. They usually go by the boards in a few days or weeks.

So far as I can tell, Newport borrowed "evaluation entanglement" from the physics of entangled states (Newport is a scientist) and from the description/evaluation entanglement in the work of Hilary Putnam (Newport is also a philosopher) --and both of these clusters of disciplines are pertinent and informative for Newport's primary intellectual interest in computer science: distributed algorithms that help agents work together.

Putnam's work on fact/value entanglement liberated the "thick ethical concepts" that lie under sentences such as "Nero is cruel" from the straitjacket of separate notions of fact and value judgement, so that "Nero is cruel" can express a value judgement and a descriptive judgement at the same time.  The entanglement of facts and values is a characteristic of those many statements that "do not acquire value from the outside, from the subject's perspective, for example, but facts that, under certain conditions, have a recognizable and objective value" (Martinez Vidal, Cancela Silva, and Rivas Monroy, Following Putnam's Trail, ISBN 9789042023970, 2008, page 291).

Newport's point is simpler, but lies in the shadow of Putnam's entanglement of facts and values: a person can associate her or his single failure in one element or commitment (to productivity, in this context) with the failure of his or her commitments as a whole --and this pervasive sense of value can generate "a systemic reduction in motivation."  In other words: I want to be productive in manners or projects A, B, C, and D, "and if I fail at C, I fail at the rest, and my life rots."  The fact of failure with C generates an evaluative entanglement that describes my whole life.

A life so described needs to be described in much thicker terms, however.  Such failure is very rarely simple, straightforward failure.

This is where the work of Robert Kegan and Lisa Laskow Lahey at the Harvard Graduate School of Education is very helpful.  They have focused on immunity to change: both individuals' immunity and organizational immunity.  In their book Immunity to Change, and in their large online class (I hesitate to call it a MOOC) Including Ourselves in the Change Equation, they explore and describe the significant difference between technical problems and adaptive challenges --and the mistake of undertaking technical means to solve adaptive challenges, when change is not simply a matter of altering well-known behaviors and thoughts but involves adapting one's thinking and finding new mental and emotional complexities at work.

Based upon adaptive theories of mind and organizational theories of change, Kegan and Lahey take their students on a journey of thinking new thoughts, or telling their stories in a new way --literally, changing the narrative in such a manner that both visible commitments to change, and corresponding hidden, competing commitments that block change, can reveal a person's (or organization's) big assumptions about the world.  By holding up those big assumptions to the light of understanding and reflection, persons can question effectively and adaptively whether such assumptions are in fact valid.

I took this course last fall, and found it to be a very rich experience.  I won't reveal what my own visible commitments, hidden competing commitments, and big assumptions were --except to say that I was working on a life-long issue that affects every relationship and commitment in my life.  My goal for change and understanding was something that definitely passed the "spouse test" --"oh yeah, that's you one hundred percent."

Kegan's and Lahey's metaphor, one foot on the gas, the other foot on the brake, pretty well summed up what I had been finding in attempting changes in my life and character.  That metaphor neatly invokes an "evaluation entanglement" --both a descriptive judgement and a value judgement in the same phrase.  The "thick ethical concept" is a philosophical way of telling a story --telling a narrative of your own life (or your organization's life) that frames the descriptions and the values in a certain way of thinking.  Adapting such thinking to new complexities, and changing the story by expanding and deepening it, is the core structure that liberates a person (or an organization) from taking one example of failure as "a failure of your commitments as a whole." Such increasing mental complexity and adaptive thinking is critical to avoid "generating a systemic reduction in motivation."  There's nothing that defeats a person or an organization quite like the experience of seeking change but blocking change at the same time, one foot on the gas and the other on the brake.  What is produced is a great deal of heat, significant atmospheric pollution, very little traction, and no progress.

Navigating the shoals of evaluative entanglement requires complex thinking and a certain level of lived experience.  There is no app for this.  But there is a course, and I recommend Including Ourselves in the Change Equation whole-heartedly to anyone who really wants to change.