Lukas Neville

Associate Professor of Organizational Behaviour, Asper School of Business, University of Manitoba

More Inclusive Admissions, Beyond the GRE Debate — October 16

Over on Twitter, there’s been an interesting debate raging about the role of the GRE. The GRE, or Graduate Record Examination, is a standardized test of verbal and quantitative ability that is frequently used by admissions committees selecting students for research degrees.

A number of schools are rethinking their use of this test (as you can read about by following the #GRExit hashtag on Twitter), and a heady debate has followed about whether these moves are equity-enhancing or equity-eroding.

What this conversation has raised, for me, is how poor a job many institutions do at removing bias and racism from the other parts of their admissions procedures. The employee recruitment and selection literatures tell us a great deal about how to do a better, less biased, and more inclusive job at hiring. My impression is that many admissions processes ignore these insights entirely.

If we are going to rely less on the GRE and other standardized tests, we need to do a much better job of debiasing the other ways we measure aptitude, potential and fit in grad admissions. My initial response, half joking, was that admissions committees without the GRE could just fall back on the gold standard of bias-free decision-making: The reference letters from profs! But, of course, after the jokes, the real question is: What changes to grad admissions norms would improve access, reduce the structural racism of the process, reduce bias in selection, etc.?

Here are four ideas. They’re all reasonably easy to do for committees (though for some of them, there’s work involved). More importantly, though, I don’t think most of them require major institutional policy changes. They’re things that members of an admissions committee could likely do on their own within existing procedural frameworks.

Let’s take the “secret code” out of statements of interest. For many undergraduates, their only experience crafting a statement (if they have any at all) is the undergraduate personal statement. So many of us have seen statements of interest that open with stories about the student’s formative childhood experiences, their ambitions in life, et cetera. And these aren’t what we’re looking for. What we’re seeking from these statements is basically a demonstration that the student knows how to ask a good research question, fits with faculty members’ interests, and has career goals consistent with the program. But at most schools, we don’t give a template. We don’t give examples. We don’t give a checklist of items to cover. So when we read these statements, we’re often just capturing whether students know the “inside baseball” of what committees hope to see. And where do they learn this? By having elite social networks: friends and family in grad school or academic careers, faculty mentors, et cetera. So let’s take out the mystery: make it clear what needs to be in these statements, and how the statements are used in making decisions.

Let’s do away with unstructured interviews. The apprenticeship and lab models of grad training often prioritize the student-supervisor dyad as the most important fit. So it’s natural that our first connections with prospective students are often casual one-on-one phone or video calls. It’s through those initial informal chats that we decide whether a student is the right fit or match, making overall judgments based on instinct and accumulated experience. But let’s recognize this for what it is: an unstructured interview. And we know that unstructured interviews are both poor predictors of performance and prone to rewarding social and demographic similarity. Our global, overall appraisals put more value than we might want on culturally-specific social skills and verbal fluency. And, whether we want to admit it or not, unstructured interviews are a “beer test”: “Would I want to go out for beers with this person?” If we want to improve access and representation, we need to stop choosing students on their similarity to ourselves. The fix is easy: use structured interviews. Have more than one interviewer. Choose your questions carefully, and ask all your applicants the same questions. Have each interviewer rate each applicant independently, without conferring first. Use standardized rating scales anchored to specific behaviours you’re looking for. This won’t feel natural, but the research is clear: it’s how you stop selecting on social and demographic similarity in interviewing.
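As a sketch of what “independent ratings on anchored scales” might look like in practice (all names, questions, and anchors below are hypothetical, not from any real rubric), the pooling step is just averaging within each question across interviewers before comparing applicants:

```python
from statistics import mean

# Behaviourally anchored scale: 1-5, with concrete anchors agreed in advance.
ANCHORS = {
    1: "Could not articulate a research question",
    3: "Posed a plausible question with some prompting",
    5: "Posed a focused, answerable question unprompted",
}

# ratings[applicant][question] -> independent scores from each interviewer,
# recorded before anyone confers.
ratings = {
    "Applicant A": {"research_question": [4, 5, 4], "career_fit": [3, 4, 4]},
    "Applicant B": {"research_question": [3, 3, 4], "career_fit": [5, 4, 4]},
}

def pooled_score(applicant_ratings):
    """Average each question across interviewers, then across questions."""
    return mean(mean(scores) for scores in applicant_ratings.values())

for name, qs in sorted(ratings.items()):
    print(f"{name}: {pooled_score(qs):.2f}")
```

The point of pooling only after independent scoring is that one talkative interviewer’s first impression can’t anchor everyone else’s ratings.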

Let’s think about adding work samples to selection processes. Many of us love undergraduate applicants with experience doing an undergraduate thesis, because it shows that they have experience and aptitude for the kind of independent research work they’ll be doing in grad school. But, of course, the chance to do hands-on independent research is much easier to come by at some institutions than others. So, if we select on undergrad research experience, we’re selecting to some degree on elite institutions. So, let’s do what many employers do, and create opportunities in the selection process for work samples. What this might look like will vary, but here’s an example: give students a simple, accessible introduction to a topic, and then have them identify research questions, develop or refine a research design, or do something else that helps you see their aptitude for ‘thinking like a researcher’. And, debias this process. As you would with interviews, make sure you have a consistent task, clearly defined criteria, and multiple independent raters. You may even use blind ratings, evaluating applicants’ work samples without their names or other identifiers.
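Blinding those ratings can be mechanically simple. Here’s a hypothetical sketch (names and samples invented): replace applicant names with opaque codes before distributing samples to raters, and keep the code-to-name key with one person until all scores are in.

```python
import secrets

def blind(submissions):
    """Return (anonymized, key): work samples keyed by random codes,
    plus the code->name mapping held back from the raters."""
    anonymized, key = {}, {}
    for name, sample in submissions.items():
        code = secrets.token_hex(4)  # e.g. 'a3f19c02'
        anonymized[code] = sample
        key[code] = name
    return anonymized, key

subs = {"Applicant A": "Design for a field experiment...",
        "Applicant B": "Proposed survey instrument..."}
blinded, key = blind(subs)
# Raters see only `blinded`; names are re-attached from `key` after scoring.
```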

Lastly, let’s reform how we ask for and use reference letters. This whole post started with a joking tweet about the uselessness of reference letters. And in their current state, they mostly are useless: they’re a status-signalling device, and they reward people with the cultural capital and institutional opportunities to build close relationships with tenured or tenure-track faculty members with ‘name recognition’. The content of the letters may itself reflect cultural idiosyncrasies. The rating scales (top 5%, 10%, 25%, etc.) can also be hard to interpret, because ratings may either be lenient (trying to write a “good letter” by giving a top-5% rating) or institution-specific (“hey, top 25% at a highly selective school is a great rating!”). A better approach is to request evaluations with rating scales anchored to specific categories or frequencies of behaviours, and to think carefully about picking behaviours that actually matter to success in the program, rather than measures that reflect cultural styles or other less relevant criteria.

None of this should be seen as surprising or innovative. Any of us who teach the basics of selection will recognize these practices from any introductory HRM, OB, or I/O psych textbook. They’re not state-of-the-art science. They’re just basic, nuts-and-bolts, evidence-based approaches to effective and bias-free selection. But there are a lot of schools (even ones whose faculty study selection, discrimination, and related topics) that still use selection techniques and instruments that make the GRE look positively benign by comparison. So, if your school is dropping the GRE for equity and access reasons, or just dropping it because of COVID complications, now would be a great time to take a critical look at the rest of your selection process.

Is Forgiveness a Public Health Issue? — February 21

A lovely argument for the idea of forgiveness as a public health issue in the new American Journal of Public Health:

If forgiveness is strongly related to health, and being wronged is a common experience, and interventions, even do-it-yourself workbook interventions, are available and effective, then one might make the case that forgiveness is a public health issue.

I think one of the most important things that forgiveness researchers must emphasize, though, is that forgiveness does not mean condoning or justifying an act.  It does not require an absence of sanctions.  It does not free transgressors from the consequences of their actions.

What changes is the motive:  When we forgive, we don’t enact justice just to see a transgressor suffer.  Justice, including punishment, is pursued to provide amends, to restore equity, to communicate to the transgressor the wrongness of their act, and to deter future transgressions.

If forgiveness is related to health (it is), if forgiveness is good for individuals, for relationships, and for organizations (it is), and if forgiveness can be learned (it can), then one of our first orders of business should be to show people a path towards forgiveness that does not require them to be doormats, and does not require them to temper their pursuits of justice.

Measuring Tradeoffs — December 7

Over at Maclean’s, there is an interview with Philippe Lagassé in which he offers a limited defence of Ottawa’s recent, ill-received public survey on electoral reform.

Lagassé makes the argument that this survey does succeed in that it makes explicit that electoral systems are about tradeoffs.  Ultimately, there are tensions between, for instance, genuine local representation and the predictability achieved with party discipline.   Lagassé scoffs at “the reaction…  that there are no trade-offs or trade-offs shouldn’t be presented in stark ways.”

But here’s the challenge I see, even if we start from the premise that it’s useful to help people think through the tradeoffs that shape their preferences for electoral systems.

The questions the survey asks are not designed in a way that elicits careful consideration about tradeoffs.  

Here’s an example, scored on a Likert scale from 1 (strongly disagree) to 5 (strongly agree): “A ballot should be easy to understand, even if it means voters have fewer options to express their preferences.” Later, the survey presents a similar question as a forced choice between two alternatives: “Ballots should be as simple as possible so that everybody understands how to vote OR ballots should allow everybody to express their preferences in detail.” These questions do speak to a genuine tradeoff associated with different electoral systems.

One gripe with this kind of question (the most common one I’ve seen on Twitter) is that it fails to consider how well-designed electoral systems might reduce the steepness of these tradeoffs.  (I.e., there are good ballot designs that manage to allow for more expressed preferences at modest cost to simplicity and clarity).

But even granting that these tradeoffs exist, the survey does little to capture the ideal balance between the considerations.  The first question above is framed as a gain in one area and a loss in the other (a gain in clarity, a loss in options), and there’s ample evidence about how gain and loss framing affects judgment.  The second question treats the tradeoff a bit more fairly, but presents it as a stark, dichotomous either-or rather than a balancing act.

Compare those questions with the following:

[Image: a survey question asking respondents to place themselves along a continuum between two competing priorities]
MyDemocracy is meant to be about tradeoffs and priorities, but instead offers dichotomies or imbalanced comparisons between gains and losses.  Asking people to rank considerations, or to choose along a continuum between competing priorities (as in the question above), would allow people to give appropriate consideration to those tradeoffs.

The total lack of genuine tradeoff thinking is also present in the section on “priorities” on MyDemocracy, which asks you to choose which of several considerations are “most important” to you — without limit.  You can choose all of them!  What possible insights into one’s priorities might such a measure yield?
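One standard fix for the “choose everything” problem is a constant-sum item: respondents must spread a fixed budget of points across the considerations, so declaring everything equally important is itself an informative answer, and a ranking falls out for free. A minimal sketch (consideration names below are invented for illustration):

```python
def validate_constant_sum(allocation, budget=100):
    """Return considerations ranked by points, provided the allocation
    spends exactly the budget and no value is negative."""
    if any(points < 0 for points in allocation.values()):
        raise ValueError("Allocations must be non-negative")
    if sum(allocation.values()) != budget:
        raise ValueError(f"Points must sum to {budget}")
    return sorted(allocation.items(), key=lambda item: -item[1])

priorities = validate_constant_sum({
    "local representation": 40,
    "proportionality": 35,
    "simple ballots": 15,
    "stable majorities": 10,
})
# priorities now lists the considerations ranked by the points each received
```

Unlike an unlimited “most important” checklist, an allocation of 40/35/15/10 actually says something about how a respondent weighs the tradeoffs.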

Lagassé suggests that MyDemocracy might be a success if we think of it less as a mechanism for gathering input on electoral systems, and more as an educational tool to “convince people to look at electoral reform in a different way or to understand what their values are, or to simply try to change the terms of the debate away from the systems and toward the consequences.”

I doubt this was the intention, by the way.  Cynically, I suspect the real intention here is to defend the status quo or create deadlock by highlighting the risks and costs of various alternatives, and to signal that one change (proportionality, barely discussed in the questions) might be accompanied by a raft of other changes (mandatory voting, online balloting, changes to the voting age).  If this was the intention, then this is a very, very fancy push poll.

But even if Lagassé is right, and even if this is a legitimate and well-intentioned attempt to have people think about tradeoffs and the consequences of electoral system design, it still fails.

At the end, the payoff for this thing is still an archetype — how you fit in a cluster.  (I’m an “innovator”, MyDemocracy helpfully informed me).  It doesn’t create a bridge to help you connect your values and intended outcomes to electoral systems that would reflect those values or achieve your preferred outcomes.   This could have been useful with Vote Compass (fitting people’s attitudes to a party label could help them with their voting decisions).  But here?  It’s useless.

It didn’t have to be.  It could have been educational, helping voters develop informed opinions about how different electoral systems might reflect their values, or helping them learn how the design of electoral systems could reduce the steepness of each of these tradeoffs.  Almost a decade ago in Ontario, a referendum was held on a mixed-member proportional representation system recommended by a citizens’ commission.  The citizens’ commission was a wonderful piece of deliberative democracy — but it fell short at the ballot box, in part because “…people didn’t know what the issue was. They didn’t know what mixed member proportional stood for, so [its defeat] was not a surprise.”  A tool like this could have been used to help build a more informed electorate, to ready voters to weigh proposed reforms.  But as it stands?  It offers nothing.  Investing time filling out the survey leaves the respondent no better equipped to understand MMP or STV or any other system, or how those systems might reflect their values, preferences and priorities.

“In Small We Trust” — August 5

Ever been on a team that’s too big to function properly?  Organizations often stuff teams, workgroups, and committees full of members.  Perhaps they’re hopeful that many hands will make light work.  Maybe they want to get more perspectives, skills and resources.  Or, more likely, it’s just easier to add one more seat to the table than risk the hurt feelings of leaving someone out.

Some interesting new work from researchers at the University of Queensland and University of Toronto suggests that trust may be harder to cultivate in larger teams.

The researchers had people rate the trustworthiness of hypothetical small (versus large) companies and towns.  People tended to think of smaller firms and people in smaller towns as more likely to behave in trustworthy ways.   They also had people imagine a small versus large crowd of people.  Again, people thought of people in small crowds as being more trustworthy.

The researchers also had people imagine being caught at work for a minor offense (like making recreational or personal use of a work laptop, or taking home stationery from work). They were asked to imagine being judged by a disciplinary panel that was either large or small:

[Chart: participants’ ratings of the small versus large disciplinary panel]

People seemed to intuit that the smaller group would be fairer, more trustworthy, and more lenient in their punishment.   The same effect was found when imagining a job application being reviewed by a small (versus large) committee:

[Chart: participants’ ratings of the small versus large hiring committee]

Job applicants thought the smaller group would be more trustworthy and would make a more favourable decision.  They felt the smaller group would be not only warmer, but also more competent and capable.

The studies, of course, don’t get at real team dynamics.  They’re about our initial, gut-level associations between size and trust.  Here’s what I’d take away from this when planning work teams:

(1) Gut feelings can be a self-fulfilling prophecy.  If the researchers are right that there’s a deep-seated connection for people between group size and the trustworthiness of its members, this can matter.  If we enter a new team treating others as suspect or ill-intentioned, that treatment is often reciprocated.  Distrust can spiral as people refuse to cooperate, delegate, or share information. Don’t expect that a large team, even one filled with very trustworthy individuals, will treat one another trustingly.

(2)  Aim for the “minimum-viable team size”.  Rather than starting with a laundry list of everyone who might be interested, useful or helpful, begin by asking:  What is the absolute smallest team of people who can bring what it takes to execute the task at hand?  Let this core group begin the work, and let them choose to add others as resources, skills or connections are needed.

(3) Make big teams feel small.  When working in large groups, consider breaking them into smaller subgroups, at least at first, to capitalize on the mental “small = trustworthy” shortcut we seem to take.  You may also want to draw on comparisons:  Emphasize the smallness of a working group relative to the entire organization, or the smallness of a firm in relation to its competitors.  Or, simply think of size and trust as a tradeoff:  If you anticipate benefiting from a larger team, accept that more time and effort spent on initial trust-building is the price that needs to be paid.

Being Trusted Isn’t Easy — December 17

From a new study of London bus drivers by Michael Baer and colleagues:

[Figure: path model from Baer and colleagues linking feeling trusted to perceived workload, reputation maintenance concerns, emotional exhaustion, and job performance]

What this diagram shows is the double-edged sword of feeling trusted at work.

When we’re trusted by our managers and others, it comes with high perceived expectations.  We feel the job requires more of us, and that we’re expected to do a great deal.  And, we worry. We worry that we won’t be able to live up to those expectations.  We worry about maintaining the reputation that earned us that trust in the first place.  These two things together produce emotional exhaustion (feeling spent and burnt out), which in turn reduces job performance.  Pride complicates the picture:  Feeling trusted creates pride, which reduces emotional exhaustion. But pride also amplifies the reputation maintenance concerns that increase exhaustion.

So what can we make of this result?  We know from previous research that feeling trusted can build self-esteem and lead people to go above and beyond in their work.  But this new study suggests that some of these gains might be eroded if those we trust end up preoccupied with worry over meeting ever-growing expectations.

So, having read this, here are two questions I’d ask myself as a manager:

1. Are you trusting people with tasks they actually feel confident doing?  I would imagine that both perceived workload and reputation maintenance concerns increase as people are handed tasks outside their wheelhouse. You need to make sure that people feel as confident in their own abilities as you do.
2. Are you overburdening your most trusted employees?  Some tasks require deep trust.  Others don’t.  As you add new tasks, are you subtracting others?  Baer and colleagues talk about the need for subtraction:  “As the acceptance of vulnerability brings additional responsibilities for a given employee, chores that could be allocated elsewhere (or eliminated altogether) could be subtracted.”

Trust and White Privilege — December 8

The so-called “rental economy” (or sharing economy, or collaborative economy, or whatever you’d like to call it) is fundamentally about sharing resources.  People can now directly rent out their cars (Getaround), seats in their cars (Uberpool), clothes (Kanzee), and even rooms or apartments (AirBnB).

There are, of course, lots of models where a firm owns the resource being shared:  Hotels own rooms, Zipcar owns cars, Rent The Runway owns dresses.  But the examples I mentioned earlier are different, because individuals own the resource they are sharing.  Sharing involves risk:  If you share a designer bag with a stranger on the internet, you’ve got to believe it’ll come back in good shape.  If you use Uberpool to arrange a carpool with strangers, you want to make sure they’re not going to be axe murderers.  And, of course, if you rent out your condo on AirBnB, you want to be reasonably assured that it won’t be trashed by methheads.

So what’s the answer?  Trust, right?  On AirBnB, you get lots of information about the people you’re renting to.  You can see a picture of them, reviews from previous hosts, and even figure out if you have shared connections (perhaps you attended the same school, or have friends in common).  AirBnB advertises this to both hosts and guests:

[Screenshots: Airbnb marketing copy promoting its trust features to hosts and guests]

It’s not as though a friendly-looking picture or a shared college means that someone isn’t a thief or a methhead.  But it gives us the confidence to proceed with a bit of risk.  And trust tends to pay off:  We trust someone based on relatively silly things like a picture or a connection to a friend of a friend, and 99% of the time, that trust is honoured.  It allows us to cooperate and do things that are risky but mutually beneficial (like sharing a condo with a stranger!)

But here’s the problem with the rental economy:  It hinges on trust, and trust hinges on homophily.  We trust others who are like us.  We trust those whose faces are similar to our own.  And, we trust those who belong to the same racial group as us:

[Chart: research finding that people extend more trust to members of their own racial group]

So where does that lead?  It leads to Tressie McMillan Cottom’s experience with AirBnB, shared today on Twitter:

[Screenshots: tweets from Tressie McMillan Cottom recounting her experience as a guest on AirBnB]

I don’t know whether AirBnB hosts use ethnic and racial similarity to guide their decisions about whether to accept guests, but evidence from hiring and housing certainly suggests they will.  I would bet any amount of money that a randomized trial with Tressie’s photo versus a “stock photo of a white lady” would yield differences in reservation acceptance rates.*
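That audit-style trial can be sketched with illustrative numbers (none drawn from any real study): send matched reservation requests that differ only in the profile photo, then compare acceptance rates with a simple two-proportion z test.

```python
from math import sqrt

def acceptance_gap(accepted_a, sent_a, accepted_b, sent_b):
    """Difference in acceptance rates (condition A minus condition B) and a
    normal-approximation z statistic for the two-proportion comparison."""
    p_a, p_b = accepted_a / sent_a, accepted_b / sent_b
    pooled = (accepted_a + accepted_b) / (sent_a + sent_b)
    se = sqrt(pooled * (1 - pooled) * (1 / sent_a + 1 / sent_b))
    return p_a - p_b, (p_a - p_b) / se

# Illustrative numbers only: 200 matched requests per photo condition.
gap, z = acceptance_gap(accepted_a=140, sent_a=200, accepted_b=168, sent_b=200)
# gap = -0.14: photo A's requests were accepted 14 points less often,
# a difference far too large to be sampling noise at this sample size.
```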

Given this concern, I would take away two points.  First, of course, it’s important to note that the “sharing economy”/”rental economy” has added a new item for white folks’ invisible knapsack of privilege:  “I never have to worry if my profile picture will lead AirBnB hosts to refuse my reservation.”

But the broader issue I see is this:  The real challenge for the social economy is to create means of building trust between dissimilar people.  For rental and sharing systems to not simply reinforce and deepen homophilic trust bonds, providers like AirBnB need to build systems that create intergroup trust and help sharers trust those who are not like them.  In some ways, the sharing economy could be a place where intergroup contact happens, and we know that contact reduces prejudice.  But if trust mechanisms in the sharing economy boil down to profile pictures and social network graphs, I think it’s fair to expect the sharing economy to reproduce and exacerbate racial segregation.


* Edit:  Seems my hunch was right!  From a new working paper from Ben Edelman and colleagues:  “In a field experiment on Airbnb, we find that requests from guests with distinctively African-American names are roughly 16% less likely to be accepted than identical guests with distinctively White names. The difference persists whether the host is African American or White, male or female. The difference also persists whether the host shares the property with the guest or not, and whether the property is cheap or expensive.”

Does Commuting Reduce Political Participation? — August 30

Is your commute to work sapping your will to be a good citizen?

A new paper in American Politics Research (gated; ungated) by three researchers at the University of Connecticut suggests this could be the case.

They find that those with long commutes to work are less likely to participate in politics. Voting, contacting a government official or elected representative, campaigning, signing a petition, giving money to a political organization or volunteering for an organization or campaign — the longer your commute to work, the less likely you are to be an engaged citizen.

This isn’t only a matter of not having time, since hours spent at work don’t predict participation.  And it’s not only a matter of living in a politically-apathetic ‘bedroom community’ or commuter-filled suburb, since commuting time still matters even when controlling for various community characteristics.

So why does commuting make us less politically active?  Drawing on ego-depletion research from psychology, the authors frame commuting as a “daily grind” of stressors that depletes the resources and energy needed to engage actively in politics.  Consistent with a resources perspective, the authors find that higher income helps buffer the adverse effect of a long commute.

The paper does an admirable job, given a limited dataset, of trying to rule out the explanation that the apathetic opt for long commutes, rather than long commutes making us apathetic.  The jury is probably still out on causality here, but this paper flags an interesting and troubling dynamic:

“The findings from this article suggest that lower income commuters, while perhaps in high need of upping their level of interest advocacy… will be less likely to do so because their current situation has left them depleted of key resources needed for such action.”

Airbnb learns the hard way: Assurance erodes trust. — June 3

Airbnb is a service that allows people to open their homes to houseguests.  It’s turned couchsurfing into a high-growth business model that has hoteliers scrambling and investors looking forward to an IPO.

But there’s a challenge for Airbnb:  With people opening their doors to total strangers, how do you keep everyone safe?

A couple years ago, an Airbnb user had their house ransacked by thieves, while another found out the hard way that the guests renting their home were meth heads.

For Airbnb to scale up and succeed, they needed to protect guests from unscrupulous hosts, and hosts from larcenous or destructive guests.

Things began well, with Airbnb extending a million-dollar insurance policy to its hosts.

And then they focused on building trust by linking Airbnb to social networks (Facebook, LinkedIn) so that Airbnb accounts are linked to people’s real identities:

“Trust is the key to our community.  There is no place for anonymity in a trusted community. That’s why we’re dedicated to providing our users with the best decision-making tools possible…

We believe that the right technology can help lay the foundation for trust in other people.   Today, we are proud to introduce Airbnb Verified ID—the next step for trust at Airbnb.  Verified ID provides a connection between the online and offline spaces.”

Sounds good, right?

Except when “building trust” means asking for unnecessary assurance.

Doc Searls quotes an Airbnb member for whom the “next step for trust” meant feeling deeply distrusted:

“The new verification process is insane and insulting. I have used your service for two years. My “reality” has been verified by my hosts and my guests: people in four countries have left feedback about their experiences with me. We have talked on the phone. You have my social security number from when you sent me tax documents. You have my credit card on file. I’m happy to send you my drivers license, but don’t see why you would need it, when you already have the rest. There is just no way I’m linking up my facebook account so you can datamine my friends, keep an eye on my day to day activity, or examine my relationships. There are enough safety checks on me through the relationship we’ve already developed. Please reconsider this stupidity.”

This, for me, captures a main problem with so many of the approaches to online trust:   Trust is not assurance.  Trust is the expectation that you won’t have to use the million-dollar insurance policy, not the assurance that you’ll be protected if the guest is a thief.  When you ask for layers of assurance on a relationship that was already trusting, you risk backlash:  We’ve done business together!  Why do you suddenly need more ID from me?

If you don’t buy that assurance crowds out trust, try this experiment at home:  Approach your spouse and ask them to sign a post-nuptial agreement.  See how well the argument that you’re just “trying to build trust” goes for you.

One of the keys to maintaining trust is to rely on that trust.  When you replace trust with assurance (asking your long-trusted spouse for a contract you shouldn’t need; asking a long-trusted customer for identification that you shouldn’t need), you crowd out trust.  Airbnb seems to be learning this basic lesson the hard way.

Trust and a Hundred Million Doritos Tacos — May 1

This term, I asked my students on their exam to give me one strong argument about the benefits of trust and the costs of distrust.

Nobody mentioned the Doritos Loco Taco.

But the Doritos-shelled taco, that neon-orange fast-food abomination, offers a vivid example of the power of incomplete contracts and trust in allowing cooperation to occur.

Frito-Lay and Taco Bell collaborated for years in developing the technology that would ultimately produce this nacho-taco amalgam, and did so without formal contracting in place:

“While buzz for the DLT’s national launch was locked in, a deal between Taco Bell and Frito-Lay was not. As Taco Bell legend has it, though the companies had spent years working together on the DLT, no official contracts had ever been signed. Taco Bell’s 50th birthday was fast approaching when Greg Creed and then-Frito-Lay CEO Al Carey met in Creed’s office to hash out final details. “We both realized that if we let the lawyers get involved, this thing would get slowed down and bogged down. So we did a handshake deal–that’s all we had: You’re going to spend the money, and I’m going to spend the money [on the DLT],” Creed recalls. “Everyone was like, ‘You can’t launch without a contract.’ And we were like, ‘Just watch us.'”

The upshot of this approach?

“When we met in my office [before launch], we said that if either one of us gets sacked or promoted, we would actually have to write a contract,” Creed recalls. “When [then-Frito-Lay CEO] Al [Carey] got promoted to run the PepsiCo beverage business, I phoned him up and said, ‘So I guess we better write that contract then.’ Well guess what? We sold 100 million tacos in the first 70 days. If we waited for those contracts to be finished, we would’ve sold 100 million less.”


So, when someone next asks you:  What’s responsible for America’s obesity epidemic?  You can now confidently answer:  Incomplete contracts and trust.

Cutting out the Publishers? — April 3

“Imagine this thought experiment:

The entire editorial board of Journal X decides to quit and start a new open-source journal. Any expenses of that new journal could be funded by a university. Overnight, the new journal would BE, for all practical purposes, the exact same journal (with a new name) – at least in terms of what we should primarily care about, which is the quality of the research. Would it be that expensive to get such a new journal listed so that it appears on Google Scholar? Where are all these valuable marketing costs that supposedly exist? Seth Spain pointed out that this thought experiment actually happened in mathematics. The board of the journal Topology resigned and founded the Journal of Topology.”

— Marc Anderson on the OB-list, mulling over alternatives to  OBHDP’s $1,800 fee to make a paper open-access.

I would love to see OBHDP and other outlets reinvent themselves as online open journals, keeping the same ed board, AEs, reviewers, standards, publication schedule, etc., and simply cutting Sage, Elsevier and the other publishing giants out entirely.

I suspect that one of the biggest resources such rebellions would require is a simple, open-source, free-to-use piece of web software to duplicate the functions of ManuscriptCentral:  Submitting papers, assigning them to AEs and reviewers, tracking reviews, sending out decisions, handling R&Rs, etc.
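The core of such a tool is modest: a submission that moves through assignment, review, and decision, with revise-and-resubmit looping back to the start. A hypothetical sketch of that state machine (statuses and names invented for illustration, not drawn from any real system):

```python
from dataclasses import dataclass, field

# Allowed status transitions; a revise-and-resubmit re-enters the pipeline.
VALID = {"submitted": {"with_ae"},
         "with_ae": {"in_review"},
         "in_review": {"decision"},
         "decision": {"submitted"}}

@dataclass
class Submission:
    title: str
    status: str = "submitted"
    reviewers: list = field(default_factory=list)
    history: list = field(default_factory=list)

    def advance(self, new_status):
        """Move to a new status, refusing any out-of-order transition."""
        if new_status not in VALID[self.status]:
            raise ValueError(f"Cannot go from {self.status} to {new_status}")
        self.history.append(self.status)
        self.status = new_status

paper = Submission("Trust and incomplete contracts")
paper.advance("with_ae")
paper.reviewers = ["R1", "R2"]
paper.advance("in_review")
paper.advance("decision")   # e.g. revise & resubmit
paper.advance("submitted")  # the R&R comes back in
```

The hard parts of a real system are reviewer invitations, deadlines, and anonymization, but the workflow skeleton really is this small.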

It would be great to see a large capacity-building grant (from CFI here in Canada, for instance, or the US NSF’s academic research infrastructure program) to build the basic submission-management and publishing tools necessary for journals’ boards to free themselves from publishers.

I can’t imagine a more profoundly and positively disruptive project for academic research than a piece of ‘journal-in-a-box’ software that would allow societies to escape their publishers and cheaply and easily take their existing journals open-access.

Update:  Spoke too soon!   There’s an open-source journal workflow management tool already available (thanks, @mekki and @TIMReview).  It’s called OJS, Open Journal Systems.  Maybe there are tech issues inhibiting uptake (Journal of Management and Organization used OJS, but then returned to ManuscriptCentral), but the availability of this software probably undermines my technological-barriers argument about why ed boards don’t rise up and overthrow their publishers.  Next guess, anyone?