Editor’s Note: As part of our Conversation Starter feature, we present what we hope is the first part of an ongoing discussion about the best way to test an editor’s skills before hiring.
The author here presents a number of thoughts and ideas, and wants to hear what others think about them, to know if you’ve had similar or counter experiences, and to learn about what other innovative measures you might be taking to solve the issue.
Want to join the conversation? Email your thoughts to editor@stc-techedit.org, or write them in the Comments section at the end of this article. We’ll publish them in future issues.
We’ve got a problem in tech.
We require a lot of our editors, more so than is usual in other disciplines.
The editors who work in tech need to be clear and logical thinkers, but also nimble and creative. They often work in several content types, for various audiences. They may be thinking a brand-new project through from the beginning or evaluating one at an early milestone. They often participate in developing the very guidelines that they and the writers will be following. They may also develop templates, models, annotated samples, or other resources. When it comes to the editing itself, they may be called upon to restructure or rewrite, to tweak syntax or diction or tone, or simply to sort out the caps and sweep the commas into place. In short, tech editors are many types of editor in one. Nor are they told which role to play when; most typically, they must themselves analyze the writing, judge what is needed, and determine how best to accomplish those tasks (insofar as is possible) in the given timeframe. Then, whatever the level of edit, they’ll also be proofing their own work, as there will typically be no one else to do so.
That’s some spectrum. How does one test for all of this beforehand?
While there are scores of publishing-type editing tests to choose from, these focus on mechanical style, testing the rote minutiae of a particular style guide, typically The Chicago Manual of Style. Such tests work for publishing houses, where copyediting is generally applied in a completely independent cycle, at a defined stage in the overall editing schedule, by editors who only copyedit. But these tests do not work well for our environments, where “copyediting” is generally not distinguished from the other editing tasks, not handled in an entirely separate stage, not done by editors who exclusively copyedit. Nor would such tests necessarily be useful even for assessing the copyediting skills of a technical editor, as in our work much of Chicago (or any other standard style guide) doesn’t apply and an entire complex of other, highly specific terminology and concerns does. That’s why we end up with such detailed in-house writing and style guides.
The standard editing tests are thus wholly inadequate for our purposes. Such tests don’t assess for the copyediting skills we need: someone with very sharp eyes and detailed knowledge of the sorts of issues that crop up in our environments might still not do very well on a test designed for publishing-industry concerns. And more importantly, such tests don’t screen for the deeper, more fundamental skills that tech editors apply daily. The ability to see in a particular context, sometimes a relatively new context, ways in which what’s written is not inherently consistent, logical, or comprehensive. The ability to see where a particular document or set of topics or UI text is not organized or developed well for a particular audience or purpose. The ability to articulate issues and potential solutions clearly to the writer and other stakeholders, sometimes also to rally them to the cause. And a range of other tasks. Tech editors move fluidly between skills traditionally associated with developmental editing, line editing, and copyediting, with a little acquisitions editing thrown in. As testimony to that work, our in-house style guides travel well beyond issues of mechanics to canvass central characteristics of structure and content, sometimes for different environments, distinct purposes, varied audiences.
Ah, you say, but we’re hiring a contractor and we’ll be going through an agency. This turns out not to be much of a boon. All those agencies typically do is collect resumes. They look for resumes that list the very same skills you’ve included in the job description, but they’re not confirming those skills. That’s up to you. In this sense, an agency recruiter functions the way an HR recruiter does: simply applying the first filter, making sure that you’re looking at applicants who look good on paper.
Unfortunately, that’s an uncertain measure.
To vet editor candidates, it seems that tech departments must look instead to developing their own tests. But developing tests, and evaluating them, is itself a specialized skill. Not to mention time-consuming. What can we do to ensure that our tests are successfully filtering for the skills we need? And how can we streamline this time-intensive process? This has been the focus of conversation, in recent years, in many of the groups I’ve worked with. But could it be that in this hunt for ever better, ever more revealing, ever more predictive tests, we’re becoming ever more entangled in clearing the wrong path?
When I was teaching editing and writing classes, I got to know a potential editor’s capabilities and approach in the natural course of things. Assignments, in-class discussion, quizzes, tests — together, these served to paint a complete portrait. And similarly, whenever first working on the job with a new hire, I typically get to know that person’s capabilities and work style within a few short weeks.
Can we design tests, or indeed a hiring process, that adequately convey this same sort of information? That tell us how an editor will work with other editors, with writers, with subject matter experts? That tell us how sharp those eyes are, across a range of issues, document after document? That tell us how flexible her sensibilities are, how deep her knowledge, how able her explanations? That tell us the rhythm she’ll settle into, how she’ll juggle competing priorities? Or should we acknowledge that nothing we can discover in a test, a couple of tests, or even a series of interviews, no matter how clever the questions, can give us the depth and richness of information that actually working with someone does?
Should we think, that is, about revamping our process altogether?
Should we acknowledge that nothing beats a full-scale assignment — not the smaller-scoped test assignments we devise, not the samples we ask to see — for telling us whether an individual is a good match for a role or not? And, when you come right down to it, that nothing beats working alongside that person for two to three weeks, day in and day out, on a series of assignments?
This is my question. Should we follow the lead of the publishing industry and pay candidates to work for a short probationary period, while we assess the fit with the only test that will really tell us what we need to know: working with someone? So as to better understand, in a deep and nuanced way, how a person actually works and what, in her writing and editing, she is capable of — before actually hiring her.
I’d like to start a conversation. Who’s in?
To see more of Odile’s thoughts on this subject, go to her blog: https://palimpsestediting.com/the-editors-notebook/editors-and-editing-2/how-can-we-better-determine-an-editors-skill-before-hiring
From Scott Abel:
Thank you for taking the time to think through what’s required to determine an editor’s skill before hiring. It’s a big topic that’s often hard to wrap our heads around.
I’d like to encourage technical documentation management to also include criteria for selecting editors based on what a job demands and whether a candidate’s *instinctive approach* is likely to make them successful — or not.
Selecting the best candidates involves more than determining how the job needs to be done and whether a candidate has the requisite *skills*. Selecting the right team members often gets mired in all sorts of time-wasting exercises in which unqualified people (whose expertise is not in hiring talent) attempt to determine what criteria we should use to determine whether a candidate possesses the *right stuff* for the job at hand.
Unfortunately, relying on folks with no expertise in understanding why employees succeed (or fail) to set criteria for determining skills usually results in non-optimal hiring decisions, leading to poor results. It’s no wonder that employee loyalty is at an all-time low and that many tech comm teams report difficulty retaining existing personnel and finding right-fit replacements.
My response to Scott (posted originally on LinkedIn):
Now, it’s true that skills alone do not the proper fit make. But my role in the hiring process has most often centered on editorial skill — insofar as I can suss it out with tests, samples, and interviews.
In my original post (this article here in Corrigo, above), I noted that when I was teaching, I got to know a potential editor’s capabilities *and approach* (so: teamwork, etc.) over the natural course of things, as the class progressed. Just as whenever first working with a new hire, I typically get to know that person’s capabilities and work style within a few short weeks.
As would any of us in those same situations.
I was wondering, why could we not change up our hiring processes to incorporate a kind of trial run? At the very least, an actual editing assignment — an extensive assignment — that would reveal ever so much more about that person’s skills and style of working than we can ever rightly predict with the standard array of tests and interview questions and so on.
The editorial skill we require of our editors is breathtaking. However can we test for all this beforehand? Added to that, of course, though this is not a question I was pursuing, are all the other skills and characteristics to be desired in someone to make them a great fit not only for the work, but for the team.
I focused in on the nature of editorial skill in this second article (the one Yoel links to in the comments) because I could see from some of the responses to that original article that I should make more tangible what I meant by “editorial skill.” So that persons new to editing, editors new to tech, editors or writers working outside of tech, and other stakeholders or interested parties would have a better idea of what I meant by saying that one test cannot give us that information. One very long editing assignment — an actual assignment — could go a long way towards that. And by “editorial skill,” I should note, I mean not only the grammar and language skills, not only the structural skills, the analytical skills, but some of the project and people skills as well.
I am not claiming that editorial skill alone determines the best editor for a given role. But to assess the skill level of a given editor candidate, I’m saying that we would do better to work through a paid assignment with that person.
Bonus: We’d get a lot of that other info on working style as well.
From Scott Abel:
Assuming this thread aims to help technical communication teams find the right talent…here’s my two cents.
I’m less concerned about their editing skill and more concerned about their conative wiring. Selecting a candidate with amazing editing skills does nothing to prevent projects from going off track, deadlines being missed, or team members failing to collaborate. And it doesn’t help you spot the person who is not going to be able to meet deadlines, collaborate with others, work quickly, or implement a solution.
What we need to understand is how each individual team member is wired. Great job fit means achieving two goals — creating jobs that serve the needs of the organization and getting people into roles that fit their instinctive strengths. Testing their editing skills misses the mark (for many reasons) unless it’s accompanied by more useful metrics.
From Anna Erickson:
Agreed, an editing test is not enough to find the right candidate. To echo Scott, simply testing whether someone is an adequate copy editor is not sufficient, as you cannot determine the candidate’s self-motivation, ability and/or willingness to learn new concepts, and critical-thinking skills, among other things. I don’t have the perfect answer for the right way to assess the other essential skills needed for an editorial position, but some ideas include tests that better measure critical-thinking skills, a greater eye to vetting the candidate through references (if an external candidate), a thorough discussion about their past work history (especially if they have not stayed in one position for any length of time or there are other red flags), and, if at all possible, a review of past work product. At the end of the day, hiring someone you don’t have experience working with is always a gamble. Sometimes it pans out and sometimes it doesn’t. Good luck.
Very true – and that’s the point of the article.
What do you want to test for in an editing test? Spelling? Grammar? That seems too basic, like simple proofreading. But how do you test for deeper editing skills? For structure/logic/flow, thought processes, ability to convince writers to change what they’ve written, reader advocacy, and so on? Maybe an editing test isn’t the correct approach at all, and we need to find a different way?
Reply from Anna Erickson:
Yes, beyond just spelling and grammar most definitely.
One idea is to give them test material that requires comprehensive technical editing to assess their broader editorial skills. This way, you may gain more insight into their thought processes and ability to structure content logically and in a reader-friendly manner. It goes beyond that to personality and work ethic, I think. How do you assess that they are a self-starter? That they possess the necessary business acumen and are assertive in the way they work and communicate? That they are technically astute? These attributes are integral to a good employee and go beyond just editorial skills. So, while I think an editing test is necessary for an editorial role, perhaps the overall interview process should include testing in other areas: technology, personality, behavioral, etc.?
In response to Anna’s comment beginning “Yes, beyond just spelling and grammar most definitely,” in which she poses a number of other criteria and suggests also testing in other areas such as technology, personality, and behavior:
Or actually working with them to find out.
If we changed our process to include a kind of trial run, all of these questions and more would be answered.
We’d no longer be looking for ways to better predict. We’d know.
In response to Anna’s comment that an editing test is not enough to find the right candidate, in which she says that “At the end of the day, hiring someone you don’t have experience working with is always a gamble”:
So, let’s stop hiring people we have no experience working with! Let’s build that experience into the process by running through a paid assignment or two with the most promising candidates.
That would give us the information we need. We would learn what it’s like to work with that person by directly working with them.
It would mean a change of process for us, but the result would be better hiring choices. Don’t you think?
In response to Scott’s comment on this article, on finding the right talent:
Again, working through a paid assignment or two would yield that sort of information — in a way that a “test” never will.
So my ultimate question is, why don’t we adopt this method as a means of assessing candidates? Why don’t we actually work with them, in a kind of trial run, to find out all those illuminating things about how they actually work?
My point about the breadth and depth of editing skill was that we cannot hope to discover all this in one test, or in samples or an interview. Add in all of these other metrics, and that’s all the more reason to stop devising ever more elaborate schemes to “test” for these qualities and characteristics in favor of simply finding out.
That would give us direct access to the best talent.
Give the editor an article and see how many edits he or she can make.
For example, this article violates several grammatical rules such as…
1. Never end a sentence with “with”.
2. Never start a sentence with “But”.
That’s the point of this article – what do you want to test for in an editing test? Spelling? Grammar? That seems too basic, like simple proofreading.
But how do you test for deeper editing skills? For structure/logic/flow, thought processes, ability to convince writers to change what they’ve written, reader advocacy, and so on? Maybe an editing test isn’t the correct approach at all, and we need to find a different way?
BTW, the two examples you give of “grammatical rules” are actually “nonrules” – they are both perfectly acceptable (if not overused).
This is a perfect example of this article’s point – we need to look beyond the superficial and arcane and look for editing skills on a deeper level.
These are actually nonrules — also known as faux or zombie rules.
If you search on the phrase “grammar superstitions” or “grammar zombie rules,” you’ll see a veritable cornucopia of articles on these points. Most writing texts also address these often-repeated nonrules, so as to assure writers that they need not bother with them, and a few books have been written solely to explore the history of such prescriptivist poppycock, all of which have nothing to do with the actual grammar of the English language and most of which have nothing to do with fine writing either.
Reading where and how such superstitions came into being is instructive.
From Michele:
I usually give potential technical editors a sample of our content to edit. I expect to see the obvious grammar, spelling, and punctuation edits. But the most telling aspect of this exercise is the questions they ask. I encourage them to write their questions in the margins. Their questions give some insight into their thought process and level of editing experience.
Michele, I agree that the questions a candidate asks are very telling indeed! I also encourage questions, questions they would ask of the writer or the SME, as well as any questions they have for me.
But still, no one test will give me as much information as an actual assignment — which, realistically, we’d have to pay them to undertake. I’d like to see a fully edited doc back from them, at the very least. Even better: two or three.
If we paid them to undertake this work, this would not be an unreasonable ask. (As it would be if it were, for example, an unpaid test.) And it would be money well spent. An investment in finding the right editor, the right fit.
From Riley:
One lesson I’ve learned is that the meaning of the term “editing” is subjective. So if I’m asked to edit something, my immediate response is to ask the requestor to define their criteria for successful editing.
In one instance I tried to edit what was being called a white paper: The structure and writing were so abysmal that no concept of “editing” could fix it. Part of the problem was that the contributors to the so-called white paper had advanced academic degrees. So they weren’t especially open to suggestions. This experience was where I learned to clearly define what the client meant by editing.
In another instance, “editing” meant applying some basic technical communications techniques to blog posts written by technical consultants. The primary directive was to “preserve the author’s voice”. So for the most part I inserted some H2 headings, moved some sections’ conclusions to the beginning of the section, and tidied up a few terminological and grammatical issues. There were issues that conflicted with my standards for structure and clarity, but I set those standards aside in pursuit of maintaining a given author’s style.
So my thought is to define what you want from an editor. A very good editor of blog posts may not be the right person for an academic “white paper”.
Note also that my experience has been that editing is iterative. A test may identify the presence or absence of some aspects of editing but won’t provide much insight into an editor’s ability to incrementally move things in a desired direction…
Riley, you are spot on, on both points. The term “editing” is used to mean so many different types of reviews and interventions that it has almost become meaningless.
When I develop a test, or place an assignment, I spell out as fully as I can the sort of work we’re looking for. And I encourage questions.
As for the iterative nature of editing, yes, yes, yes. In this, it mirrors writing.
From Christina:
We give candidates an editing test to complete that takes less than an hour. It’s crammed with lots of obvious mistakes, so I assumed it would be too easy to ace, but the variations in skill levels that this test reveals are surprising. You can also tell something of a candidate’s personality by the tone of the comments they leave. Some people are abrupt and almost confrontational; others are apologetic and cushion their comments with too much fluff. This test still doesn’t reveal who will be agreeable to work with; that much is a roll of the dice. But for basic copy editing skills, it works.
Christina,
Ah, but I’m looking for ever so much more than basic copyediting skills, and that’s the point. If I were looking to test only for simple copyediting, the task would be easy. But I’m not. I’m looking for deep editorial acumen and skill.
I’ve never found a way to adequately assess for the depth and breadth of what I’m looking for within the confines of a test, with its limitations most particularly on the time we can expect a candidate to give away. Reviewing samples helps. Extended interviews help.
But I still maintain that nothing we can do in that context — the test, the samples, the interviews — can touch the sort of information we’d get back from requiring instead an edit of a full (and let’s say, rather lengthy) doc. Or two. Yes, we’d have to pay the candidate for their time, but it would be money well invested. Bringing in the right person from the start saves time and money.