The Value of Failure

One of the things that stood out to me in this week’s readings was Lisa Spiro’s inclusion of experimentation as a key value of digital humanities, specifically her championing of experimentation even knowing that failure may be an outcome. Spiro says, “Not all experiments succeed as originally imagined, but the digital humanities community recognizes the value of failure in the pursuit of innovation.” There’s something very liberating in acknowledging that a failure doesn’t render the work leading up to it unsuccessful or meritless.

The Torn Apart/Separados project fully owns and explores in depth the mistakes made in the creation of the site. For example, in Vol. 2, the creators concede that earlier graphs displaying the total value of ICE contracts were based on miscalculations and misunderstandings of how government contracts work. They also discuss ways their original ideas for representing their data ultimately did not work. They explain how they couldn’t use a word cloud to display the different contract awardees on a scale that was both legible and accurate. And they had to rethink their display of data on the self-reported gender and racial demographics of the contract awardees to make their purpose for including such information clear: they wanted to expose the diversity of people involved in these contracts, but in no way did they want to suggest there needs to be more diversity of people working for ICE. So they moved away from their original pie charts, which mimicked the way this data is proudly displayed by the government to demonstrate opportunity.

By including all of this information about their process and methods, both successful and “failed,” I think they are also demonstrating another value highlighted by Spiro: openness. Indeed, there are two volumes of this project now, both of which are available to view, rather than the latter replacing the first. They admit the first iteration was created with a narrower purpose (mapping where the detention centers are located, showing that the “border” is everywhere), but the initial investigations led them to more questions, many of which they grapple with in Vol. 2. And even within Vol. 2 they discuss questions they were unable to fully answer and visualize (for instance, how the centers and ICE are reported on in the media) and ways they hope the project can continue. To me, this speaks to the project creators having open minds and a willingness to engage with the data and source materials responsibly rather than trying to force an outcome or narrative onto them. And through their openness in addressing their shortcomings, or more accurately the unfinished-ness of their project, they have reframed it not as an ultimate failure but as a call to action to bring more people into the project and continue the work (hello collaboration!).

On a semi-unrelated note, I also found the discussion of the peer-to-peer semipublic review process of the Debates in Digital Humanities series to be incredibly fascinating. Coming from a background in medical publishing, with a much different peer review process, I admit I never even thought about what peer review could look like in other fields. I think this speaks very much to the collaborative nature of digital humanities, recognizing the value of academic endeavors not just in their final products but also in the processes through which they were created.

4 thoughts on “The Value of Failure”

  1. Rachel Dixon (she/her)

    I was also struck by the acceptance of failure, especially as it relates to the idea of “creating new ignorance”: rather than considering projects failures, they can open up new opportunities for inquiry and discovery. DH experimentation sounds particularly adventurous to me when described in this way, though “adventure” may not have been mentioned as a DH value. Coming from a corporate definition of experimentation, where failure is not often welcome (even if it leads to new discoveries), I found this encouraging.

    Likewise, I find the peer-to-peer review process to be adventurous, collaborative, and not dissimilar from open source software projects on GitHub. In my software development life, these kinds of peer-to-peer reviews can be incredibly helpful, though in recent years I have seen many conversations about gatekeeping and about community standards for marginalized members of the tech community using such services and practices. It is good, then, that peer-to-peer reviewing is happening at the same time that DH is reflecting on who is being centered within the field, though that question is always worth considering.

    1. Brianna Caszatt (she/her/hers) Post author

      Yes, I agree, I think of it as adventurous too! And I also find it very encouraging.

      I’m very curious to know more about what those gatekeeping and community standards conversations looked like. How do people exclude others from areas that are otherwise open access?

      It’s taken the medical publishing field years and years to begin to accept “open access” as a best practice, and when I was last in the field (about 2 years ago), “open access” usually translated to free to everyone after 6 months. The publishers still needed to justify their subscription rates to libraries and whatnot, as ultimately that is how they made their money, so most articles weren’t immediately free, though exceptions were made, e.g., during the Zika outbreak, and I’d assume now with any COVID-related research as well. And some journals were just beginning to experiment with a more open peer review process, in which the writers could see which reviewers were leaving which comments and respond directly. There was also great pushback against preprint services, where people were publishing their studies on open access servers before they were officially peer reviewed and accepted for publication in a journal; even if scientists were reading each other’s work in that way, a lot of journals weren’t allowing the authors to cite the preprint studies in their articles.

  2. Rachel Dixon (she/her)

    Open access has so many potential interpretations depending on the field! Thank you for these insights. I see some similarities and some differences to open source software development and distribution. Both creation and consumption can be open or closed, but historically open source software aimed to be open in both modalities. A good portion of the conversation I am familiar with is specifically around code reviews, and I am thinking most specifically about this Mozilla experiment: https://blog.mozilla.org/blog/2018/03/08/gender-bias-code-reviews/ and this linked Wired article: https://www.wired.com/2017/06/diversity-open-source-even-worse-tech-overall/

    While open code review may not be a 1:1 match for peer review (I’m not experienced enough to say), creating a monoculture and norms for openness without considering how those norms may challenge, or be challenged by, those outside that culture implies that there’s a limitation to the concept of “open.” Open source software is so essential to modern computing, and I hope that some of these experiments help the open community become better collaborators or build better collaboration tools. It is encouraging to see DH attempt to confront similar contradictions as they happen.