Technology is becoming more involved in the writing and editing processes. Are we prepared to stop it when it oversteps its boundaries?
Computer-generated translation errors are an increasingly common obstacle in everyday media. Global internet access means that more and more people are learning informally in second languages or relying on translation software. As we grow dependent on software and programs for our language needs, computers can introduce errors and "correct" things that don't need fixing. Programs like spell-checkers lack the capacity to fully consider context, and that's doubly true in specialized communication, such as automated translations and video captions.
In "'Tortured Phrases' in Post-Publication Peer Review of Materials, Computer and Engineering Sciences Reveal Linguistic-Related Editing Problems," Jaime A. Teixeira da Silva (2022) reviews cases where heavy-handed editing software made questionable lexical changes to scientific research papers, changes that nevertheless made it into print. These "tortured phrases" have gone through the technological wringer: plagiarism checkers and textual similarity analyzers have introduced unnecessary and even misleading changes to their phrasing. Such a change can be humorous in a conventional English phrase, but in a mathematical or medical term it can be seriously problematic.
Teixeira da Silva presents a total of 35 such errors found by peer readers in technological and medical research articles post-publication, including terms like "voice recognition," which became "discourse acknowledgement"; "thermal stress," which became "warm anxiety"; and "malicious parties," which became "compromising get-togethers" (Teixeira da Silva 2022, 3). Swapping in a synonym or rephrasing a long term is not as harmless as it seems: the plagiarism checkers interpreted these terms as the individual words they comprised, not as the whole units of meaning they were. As a result, the specificity they provided was lost and their intended message became unclear. Teixeira da Silva argues that the likely culprits are textual similarity analyzers, translation software, automated paraphrasing tools, or some combination of the three. While using such tools is not inherently wrong, it is the responsibility of editors and reviewers to catch the errors that software introduces before the final work is published.
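The mechanism behind these mangled terms can be made concrete with a toy sketch. The snippet below is a hypothetical illustration, not the algorithm of any actual paraphrasing tool: it replaces each word of a phrase independently from a synonym table, exactly the word-by-word substitution that turns a fixed technical term into a tortured phrase.

```python
# Toy illustration (not any real tool's code) of how word-by-word
# synonym substitution "tortures" a multi-word technical term.

# Hypothetical synonym table; each entry is plausible for a single
# word in isolation.
SYNONYMS = {
    "thermal": "warm",
    "stress": "anxiety",
    "voice": "discourse",
    "recognition": "acknowledgement",
}

def naive_paraphrase(phrase: str) -> str:
    """Replace each word independently, ignoring the phrase as a unit."""
    return " ".join(SYNONYMS.get(word, word) for word in phrase.split())

print(naive_paraphrase("thermal stress"))  # -> "warm anxiety"
```

Because the substitution never considers the phrase as a whole, "thermal stress," a fixed engineering term, comes out as "warm anxiety," which is precisely the kind of error a human editor would catch and a context-blind tool will not.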
While the examples Teixeira da Silva discusses are specific to a few industries, this research applies to a variety of everyday scenarios as machine translation grows in use. As Teixeira da Silva explains, "Such adjustments [of common phrases] would introduce fatal linguistic errors, ultimately reducing comprehension by the reader" (2022, 2). At worst, these tortured terms can cause grave errors; in everyday contexts, they still cause misunderstandings and impede effective communication. The application for editors is even more direct: editors of all kinds should watch for these potential problems as they work. Our ability to spot these kinds of issues in writing is exactly what makes a human editor better than a coded one.
To learn more about computer translation errors, read the full article:
Teixeira da Silva, Jaime A. 2022. "'Tortured Phrases' in Post-Publication Peer Review of Materials, Computer and Engineering Sciences Reveal Linguistic-Related Editing Problems." Publishing Research 1: 1–6. https://doi.org/10.48130/PR-2022-0006.
—MJ Christensen, Editing Research
FEATURE IMAGE BY 愚木混株 cdd20
Find more research
Read Maddy Abadillo’s Editing Research article to learn more about how the technical editing process can fail: “Why Some Review and Revision Processes Are Unsuccessful.”
For more about the efficacy of editing software, read Simon Laraway’s Editing Research article “Should Editors Use Grammarly?”