As a reviewer, selecting the right speaker for an event is a high-stakes task. The stronger the speaker, the more valuable the talk. On the other hand, a weak or off-message speaker can threaten the success of your event.
But the speaker selection process can’t afford to waste significant time or resources on finding the exact right fit. For the process to run as smoothly and effectively as possible, with the best outcome for your organization and event, the review needs to be free from ambiguity. It needs to be clear and streamlined.
From putting out a call for abstracts to managing the vetting process and selecting the speakers, there’s a long to-do list. Even after the speakers have been selected, reviewing teams need to collect bios, approve material, gather supporting documents and much more.
To help, we've gathered tips, practices, and strategies for reviewing speaker applications for any type of event. And with digital tools like modern submission software, you can manage the entire process remotely from start to finish.
Identify and agree on all reviewing details
Before any reviewing begins, the first key action is to identify the exact details of what the review process will look like.
Here’s where it’s useful to specify who the reviewers are, how many rounds of feedback and edits you’re expected to carry out, if and how work can be re-assigned, and any other determining factors the process might include.
Gaining clarity on these details before the process begins helps everyone understand exactly what their role is, so everyone's on the same page. Small but task-blocking confusions get ironed out early, leaving the time and space to tackle the submissions themselves.
Watch segments of previous presentations to assess a speaker’s ability to engage with listeners
When it comes to selecting the best speakers for an event, it's important to factor in how well they engage listeners and viewers.
One key way to go about this is to source presenters directly from a speakers bureau, says Kathy Koehler, Director of GroupActive. With over 20 years of experience planning corporate group events, Koehler shares that going the traditional route "allows you to peruse their catalog of speakers, get a quick snapshot of their talking points and listen to segments of their presentations," all virtually, before making a decision.
For Koehler, this stage is all about getting a feel for the speaker's personality and assessing their ability to engage their listeners.
“It’s valuable to learn how the audience/group perceived the presentation,” Koehler explains.
Approaching the process in this way can help reviewers make a more thorough evaluation of a speaker's suitability. And at a time when remote communication is ever more common, it's important to assess how well speakers can maintain engagement through a screen.
Don’t overwhelm reviewers—assign tasks quickly and easily
Whether we’re talking about speaker submissions or college applications, having enough reviewers is absolutely vital.
Taking on more than what’s realistic only leads to reviewers being overloaded and overwhelmed, potentially harming the quality of the evaluation.
With digital peer review, reviewers can home in on the best applicants quickly: smart, intuitive portals show submissions alongside the other necessary application materials and review notes. Submission software also makes it easy for multiple reviewers to collaborate, fill out rubrics, or add comments to their scoring.
With straightforward, modern submission software like Submittable, it's easy to distribute submissions to reviewers automatically or organize your workload. Or you can manually accept or assign tasks as they come in. Whichever rhythm you prefer, submission software makes it easy not to do it all yourself.
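To make the automatic-distribution idea concrete, here's a minimal sketch in plain Python. It is not Submittable's actual API, just a hypothetical round-robin assigner of the kind such tools implement under the hood:

```python
from collections import defaultdict
from itertools import cycle

def distribute(submissions, reviewers):
    """Deal submissions out to reviewers in turn (round-robin),
    so no single reviewer ends up with the whole queue."""
    assignments = defaultdict(list)
    for submission, reviewer in zip(submissions, cycle(reviewers)):
        assignments[reviewer].append(submission)
    return dict(assignments)

# Example: seven submissions dealt across three reviewers
queue = [f"sub-{i}" for i in range(1, 8)]
workload = distribute(queue, ["Ana", "Ben", "Chris"])
print(workload["Ana"])  # ['sub-1', 'sub-4', 'sub-7']
```

Round-robin is the simplest fair policy; real tools typically layer on caps per reviewer and conflict-of-interest checks.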
Consider the volume of submissions when deciding the reviewer requirements
Let’s imagine a reviewer has only two or three speaker submissions to score. Completing a scoring sheet, providing feedback to the submitter and writing a quick summary of the submission might not seem like such a big deal.
But in reality, that’s hardly ever the case. Reviewers generally tend to be responsible for a long list of submissions, in which case time is of the essence. Any lengthy or complicated process here only takes away valuable time from reviewing more submissions and contributes to fatigue.
One way to simplify the reviewing experience is to compare the requirements for a bare-minimum review with those of an ideal review—then find the happy medium between them.
Playing around with your marking scheme (e.g., balancing written commentary with short scoring sheets) can help you pinpoint what's achievable within a realistic timeframe.
Another area worth revisiting is optional sections. Are there several for reviewers to complete? If so, it may be time to consider which are truly necessary and which can be removed. Less time on added tasks means more time on reviewing.
Eliminate groupthink tendencies by using a collaborative tool
Using technology to streamline review processes can save time, reduce clutter and automate tedious tasks.
But the real value is in its ability to help you make better decisions.
Management consultancy McKinsey & Company describes the process of selecting speakers as a cross-cutting decision. In essence, it's "a series of small, interconnected decisions made by different groups."
That means a lot of collaboration is needed, and that can have a big impact on an organization.
If you're using submission software, you can send collaborating reviewers to a custom, branded portal, where they log in and see the assignments they've been tasked to review. They can access all attachments in one place, rely on automated reminders, and receive straightforward reports with a direct view of the workflow.
You can define the objectives, scoring system, and criteria for all reviewers to see, with a bird's-eye view of the entire speaker selection status.
With this transparency, you can avoid vague discussions, individual preferences, and groupthink tendencies. It removes a lot of ambiguity from the process, so you and your team feel more confident in the final decisions.
Be sure to secure a diverse lineup of speakers
Although ensuring a diverse lineup comes into play before the actual reviewing begins, it’s one of the most crucial elements of a strong review.
Diversity in speakers isn't a new conversation. In a study of over 60,000 presenters at events over the past five years, just 31% of speakers and presenters were women.
No one wants their event to end up in a Tumblr call-out post, at least not for that.
Using submission software, consider setting up anonymous reviews during the selection process. Hiding applicants' names, races, genders, and ages helps reduce implicit bias. You can also create a 'diversity dashboard' once speakers are selected, letting you view accepted submissions by a list of attributes.
While the selection process will have already been completed at this point, observing finalized submissions based on those attributes will help shine a light on whether selections are skewed toward certain groups or lacking in others.
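At its core, a diversity dashboard is just a tally of accepted speakers by attribute. Here's a minimal, hypothetical sketch of that tally; the field names and the example lineup are invented for illustration, not drawn from any real tool or dataset:

```python
from collections import Counter

def diversity_breakdown(accepted, attribute):
    """Tally one attribute (e.g. gender or region) across accepted
    speakers and return each group's share of the lineup, in percent."""
    counts = Counter(speaker[attribute] for speaker in accepted)
    total = sum(counts.values())
    return {group: round(100 * n / total, 1) for group, n in counts.items()}

# Hypothetical accepted lineup
lineup = [
    {"name": "A", "gender": "female"},
    {"name": "B", "gender": "male"},
    {"name": "C", "gender": "female"},
    {"name": "D", "gender": "male"},
]
print(diversity_breakdown(lineup, "gender"))  # {'female': 50.0, 'male': 50.0}
```

Running the same tally for several attributes makes skew toward or away from particular groups immediately visible.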
Going remote—and still finding the best speaker submissions
Clear communication, concise decision-making, and a transparent scoring process form the backbone of a quality speaker review. With so many tasks surrounding the main job of reviewing, it's easy for the whole system to feel out of control.
But by leveraging technology, and submission software in particular, you can hand off a big chunk of the load and spend more time scoring. Combined with stronger decision-making and a seamless line of communication, you can attract and select higher-quality applications while boosting your association's prestige in the process.