Ethical principles | Greenhouse

Greenhouse’s commitment

Our ethical principles

The guiding principles we use in evaluating new and existing features and products – and in making key business decisions related to the data we’re entrusted with.

Greenhouse is committed to being an ethical company

Companies around the globe have benefited from the use of technology to aid in hiring great employees. Automation can greatly reduce the load of administrative tasks in the hiring process. Data analysis can help companies refine and improve their recruiting processes. Machine learning (ML) and artificial intelligence (AI) can assist employers in making better hiring decisions more efficiently.

But there are also great risks with using these tools. Automation can encourage discriminatory practices. Data analysis can potentially expose private information to the rest of the world. Machine learning can hide and even reinforce existing biases.

As a leader in hiring software, Greenhouse and its employees have a responsibility to our customers and to the millions of candidates who apply for work using our products. To that end, Greenhouse’s Data, Product and Privacy Ethics Committee has formed a set of ethical principles to serve as our North Star when we evaluate potential new products and features to develop, and to inform any business decisions that we make related to how we handle candidate and customer data.

Our hope is to spark a conversation across our industry and among businesses as a whole. The decision to hire or not hire someone can have a huge impact on real human beings, and we are committed to making sure that our technology supports and enables human potential. We will continue to evaluate our tools and methods at the forefront of this continually evolving space to ensure we are making the best possible decisions in our work.

Our principles

General

We will always consider how a system can harm people within the hiring process

When examining a new product or feature, it’s very easy to imagine the benefits that it can provide. It’s often harder to consider how a system can harm individuals or groups of people. For example, poorly trained facial recognition software used by the police has led to innocent people being wrongfully accused. There are also examples of resume services that were shown to favor men over women. It is impossible to guarantee that a tech product is unbiased, so we need to have a plan for what happens when one produces an undesired outcome. Our systems must also guard against bias inevitably creeping in; they must be transparent yet confidential, and they must promote equity and inclusion.

Through this committee, we aim to clarify the types of bias we assess within our ecosystem (selection bias, confirmation bias, reporting bias, flywheel effects, implicit/unconscious bias and more). The committee exists to build a more equitable framework that will identify potential biases within our product suite and prevent them from detracting from our overall purpose: to enable companies to hire the best teams.


Accessibility

We develop our product to be accessible to everyone, regardless of technology or ability

We strive to adhere to the Web Content Accessibility Guidelines (WCAG) 2.1, aiming to make our product user-friendly and accessible to individuals with disabilities.

Our team continuously improves our website's accessibility through audits, user testing and collaboration with trusted accessibility consultants. This process is ongoing and some areas may not yet be fully compliant. We appreciate your support as we work to make our website accessible to all.

This accessibility statement was last updated on Jan 4, 2024. If you encounter any barriers or have suggestions for improvement, please contact our Technical Support team.


Privacy & security

We secure candidate data as well as customer data

When a candidate applies for a job with one of our customers, they are required to provide personal information that may be confidential. Greenhouse will continue to architect appropriate security and data privacy solutions, carefully weighing the concerns of both sensitive candidate data and sensitive customer data.


We do not share candidate evaluations across customers

Given Greenhouse’s diversity of customers across sizes, industries, geographies and more, it is tempting to allow our customers to share information with one another about candidates they have interviewed. But this could transform Greenhouse into something akin to a credit agency, wielding a disproportionate and unjustified level of power over a candidate’s future job prospects, and could cause massive harm to both candidates and customers.

We will continue to prioritize offering our customers the tools they need to evaluate the candidate sitting in front of them today, not inserting someone else’s judgment of the candidate, without context, based on a discrete interaction that occurred in the past. Where sharing information may improve the hiring process for both candidates and customers, we will first seek the consent of the candidate.


Automation

We use consistency to create fairness and accountability

We use automation first and foremost to make hiring processes more consistent, because companies with consistent processes are more accountable, inclusive and offer a better candidate experience. When pursuing efficiency, we will consider whether that efficiency supports or detracts from accountability, inclusiveness and candidate experience.


We encourage people to make the consequential decisions

There are hundreds of decisions that have to be made during the hiring process, including considerations as consequential as where to post the job description, who to interview and who to ultimately hire. But there are also trivial decisions, such as what time to schedule an interview or send an email to a candidate. Automating decisions that directly or indirectly impact who gets hired risks amplifying existing biases. Unless we are confident that an automation tool or algorithm will actively mitigate or overcome biases rather than amplify them, we will prioritize automating trivial decisions to free our customers up to focus on making the important ones.


Data science, machine learning and AI

In alignment with our mission, Greenhouse aims to create tools that make our customers consistently great at hiring. We are well aware of the broad range of ethical concerns that are becoming public knowledge in the domain of ML and AI, and are actively seeking to create fair and unbiased outcomes for all of our stakeholders. To that end, we are constantly fine-tuning a rigorous set of QA processes to ensure data quality, information accuracy and a thorough evaluation of model fairness.


We prioritize the explainability of machine learning models

The Data science team at Greenhouse aims to ensure the algorithms we use and create can be explained to all of our stakeholders. We will avoid the use of black-box algorithms and deep learning models whenever possible. When Greenhouse releases a machine learning feature within our product, we will share an explanation of the type of model used, inputs to the model and a rationale for our modeling approach.
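As an illustration only (the fields and values below are hypothetical, not Greenhouse’s actual disclosures), the kind of plain-language summary described above can take the shape of a simple “model card” that names the model type, its inputs and the rationale for the approach:

```python
# Hypothetical model-card sketch: a structured, plain-language disclosure
# that could accompany a machine learning feature. None of these values
# describe a real Greenhouse model.
model_card = {
    "model_type": "regularized linear model",
    "inputs": ["job title text", "job description text"],
    "rationale": "linear coefficients can be inspected and explained, "
                 "unlike the internals of a black-box deep model",
}

for field, value in model_card.items():
    print(f"{field}: {value}")
```

Publishing something in this form lets stakeholders see at a glance what kind of model is in use and why, without needing access to the model itself.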


We do not create composite quality scores to evaluate people

Assigning numerical values to people and processes using algorithms can be dangerous and can inadvertently amplify existing societal biases. While the goal of machine learning is to simplify complicated processes, we recognize the nuances that are lost when a single score is used to rank candidates, sources and the like. We will seek to share sufficient evaluative information with our users as they make important decisions that impact human lives.


We actively seek out and mitigate existing biases in our ML technology

Greenhouse is well aware of the range of societal biases amplified by existing ML and AI models in the hiring space. We are clear that these pernicious biases cannot be mitigated by simply removing sensitive information from ML models. We are committed to routinely reviewing models we have in our product to create more fair and equitable outcomes for candidates and to empower our users to mitigate bias in their hiring decisions.


Putting principles into action

We commit to continuous product reviews

The legal and ethical questions surrounding the use and retention of data relating to individuals are complex and ever-evolving, and there is no clear, one-size-fits-all approach that a company like Greenhouse can use to guarantee perfect results every time. In fact, we expect that we will continue to iterate on these principles as time goes on and our understanding of and experience with these matters becomes more sophisticated. In the immediate term, however, we will commit to actively applying the framework we’ve set forth here to product decisions that have the potential to impact individuals and society at large. Specifically, we will evaluate all product changes for risk of bias and unfair outcomes, as well as risk to the safety of sensitive customer and candidate data. If we become aware of a negative impact on candidates or customers in any existing aspects of our product, we will reevaluate the feature, make any necessary modifications and educate our candidates and customers about the reasoning behind these changes.


We hold ourselves accountable for decisions we made in the past

We are human beings and we aren’t perfect. We have made mistakes in the past. We will make mistakes in the future. As part of our regular planning processes, we will commit time and effort to reevaluating past decisions, owning up to the mistakes we’ve made and changing course when needed. We will also be open to feedback we receive from customers, candidates and our own employees, and be willing to act on it when it is prudent and feasible to do so.


We increase transparency with our third-party partners

Greenhouse prides itself on our robust ecosystem of partners, which allows customers to integrate their Greenhouse accounts with their existing HR tools quickly and easily. Although we trust our customers to evaluate their own tools and make their own business decisions, there is potential for a lack of transparency – with respect to both customers and candidates – about which data is shared and who bears responsibility for protecting it and for managing it in accordance with these principles.

Greenhouse will endeavor to clearly spell out, in plain language, how integration partnerships work, so that both customers and candidates are aware of the extent to which their data is being shared beyond Greenhouse. If we have reason to believe that a Greenhouse partner is actively undermining our ethical principles, we will investigate and reevaluate our continued partnership with them as necessary.