Artificial Intelligence at Visier: FAQs
Last Updated: November 25, 2024
Overview
Visier has been leveraging artificial intelligence (AI) technologies for over a decade to power our people analytics platform and deliver predictions throughout our suite of Visier solutions. The following Q&A is intended to help customers understand how Visier is currently employing AI technologies.
Visier’s AI Digital Assistant - Vee
Visier uses generative AI in the form of a digital assistant designed to help users navigate the scope of their Visier People data. We call this digital assistant Vee.
Vee lets anyone in an organization ask questions about people, work, and their impact on productivity and receive immediate, accurate, and secure answers from their organization’s data.
- How does Vee work?
Vee interacts with users through a chat interface. Users ask questions in natural language, and Vee answers based on the people data that the user's organization has loaded into Visier People. To learn more about how Vee's user experience functions, see Vee's product page here.
- What is Vee's purpose in relation to Visier People?
Vee serves as a translation layer between a user's natural language question and Visier People's query language. Rather than needing to know the 'language' Visier People speaks (i.e. input prompts, filters, and metadata), a user can simply ask natural, conversational questions, which Vee then translates for Visier People to surface results. Vee thereby improves the Visier People user experience by making it easier to interact with the data the customer has already loaded into Visier People.
- What are the steps in Vee's architecture and data flow when interacting with Visier People?
- The user asks Vee a question in natural language.
- Vee requests that a partnering LLM translate the natural language question into prompts understood by Visier People.
- Vee shares those prompts with Visier People.
- Visier People provides an answer to the prompts.
- Those answers are then:
- Surfaced back to the user directly through the Vee interface (e.g. in the form of a supporting visual, graphic, or text); or
- Returned to the LLM to be translated back into a natural language response to the initial query. This response is then surfaced through the Vee interface to the user as a chat-based response.
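The steps above can be sketched as a simple translation-layer loop. This is an illustrative sketch only: the function names and stub callables are assumptions for explanation, not Visier's actual API.

```python
# Illustrative sketch of Vee's translation-layer flow (all names are hypothetical).

def ask_vee(question: str, llm_translate, visier_query, llm_summarize=None) -> str:
    """Route a natural-language question through the translation layer.

    llm_translate:  callable that turns natural language into structured prompts
    visier_query:   callable that runs those prompts against Visier People
    llm_summarize:  optional callable that turns raw results back into prose
    """
    # Steps 1-2: the user's question is translated by the partnering LLM
    prompts = llm_translate(question)
    # Steps 3-4: the prompts are run against Visier People, which answers them
    answer = visier_query(prompts)
    # Step 5: the answer is surfaced directly, or summarized back into prose
    if llm_summarize is not None:
        return llm_summarize(answer)
    return answer

# Stub callables standing in for the LLM and Visier People:
result = ask_vee(
    "How many engineers resigned last quarter?",
    llm_translate=lambda q: {"metric": "resignations", "filter": "Engineering"},
    visier_query=lambda p: f"{p['filter']}: 12 {p['metric']}",
)
print(result)  # -> "Engineering: 12 resignations"
```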
- Does Visier sanitize the question or prompt submitted by the user?
Visier does not sanitize the question or prompt submitted by the user.
- Does Vee keep user questions after a chat session is complete?
Vee clears all data associated with a session as soon as the user closes or refreshes their session. Information is not stored beyond the immediate context of the interaction.
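The session-scoped retention described above can be illustrated with a minimal sketch. The class below is hypothetical, showing only the stated behavior: conversation context lives for the session and is cleared when it ends.

```python
# Hypothetical sketch of session-scoped chat state that is discarded on close.

class ChatSession:
    """Holds conversation context only for the lifetime of the session."""

    def __init__(self):
        self.history = []  # cleared when the session ends

    def add_turn(self, question, answer):
        self.history.append((question, answer))

    def close(self):
        # Closing (or refreshing) the session clears all associated data
        self.history.clear()

session = ChatSession()
session.add_turn("Headcount in Sales?", "412")
session.close()
print(session.history)  # -> [] : nothing persists beyond the interaction
```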
LLMs
- Why does Vee partner with LLMs?
Vee uses the natural language capabilities of LLMs to translate a natural language input into prompts that Visier People understands. Vee partners with third-party large language models (LLMs), GPT-3.5 Turbo and GPT-4, both hosted within Microsoft Azure, to assist in this translation. Where the summary feature is selected, Vee also leverages the LLM to translate the output back into natural language in direct response to a user's chat input, creating a more conversational and contextual experience.
- What data is sent to the LLMs?
The LLMs process the user's original input query, data contained in the related chat session, and, when the summary capability is enabled, the output of Visier People in response to the user's input, so as to provide a natural language response. This allows Vee to serve effectively as a translation layer between a user's natural language question and Visier People's query language.
- Does the LLM keep this data?
The LLMs retain this data only for as long as the conversation thread/Vee session remains active.
Training
- Does Vee train on customer data?
No, Vee does not train on customer data.
- How is Vee trained? What is Blueprint?
Vee is taught Visier’s proprietary Blueprint, which is pre-built content made available in each Visier People module purchased by a customer; Vee must understand the pre-built content Visier has developed in order to answer user questions related to it. Blueprint is best understood as structured metadata that contextualizes user questions and maps to potential sources of output within a customer’s unique deployment of Visier People. This mapping allows Visier People to generate a relevant response to a specific user input in the context of that customer’s deployment of Visier.
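Blueprint as "structured metadata that contextualizes user questions" can be pictured as a mapping from concepts to metrics within a deployment. The sketch below is an invented illustration of that idea; the keys, metric names, and matching logic are assumptions, not Visier's actual Blueprint content.

```python
# Illustrative sketch: Blueprint as structured metadata mapping natural-language
# concepts to potential sources of output. All contents here are hypothetical.

BLUEPRINT = {
    "resignation": {"metric": "resignationCount", "subject": "Employee"},
    "headcount":   {"metric": "employeeCount",    "subject": "Employee"},
    "promotion":   {"metric": "promotionCount",   "subject": "Employee"},
}

def contextualize(question: str):
    """Map a user question onto Blueprint metadata, if a known concept matches."""
    for concept, metadata in BLUEPRINT.items():
        if concept in question.lower():
            return metadata
    return None  # no Blueprint mapping; a real assistant would ask for clarification

print(contextualize("How many resignations were there in Q3?"))
# -> {'metric': 'resignationCount', 'subject': 'Employee'}
```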
Bias
- What about bias?
Visier products are built with bias mitigation in mind. For example, Visier uses LLMs that leverage bias mitigation techniques such as specific data acquisition and pre-processing, algorithmic fairness and transparency, and monitoring and evaluation. For more information, see Visier's Bias Prevention and Transparency Statement here.
- How does Visier address concerns around bias and discrimination in generative AI?
Vee’s functionality is scoped to the context of Visier People only. It is taught Visier’s proprietary Blueprint, which has been specifically curated to address employee populations in the same manner as Visier People. Vee’s training data is not generic, and aggregate data is not used to train Vee.
Transparency
- How does Vee offer users full transparency?
Users are able to see how Vee constructed its query to Visier People and have transparency into the surfaced output. Specifically, users can view the prompts, metrics, and filters translated by Vee from their natural language input. Vee also allows for human intervention and correction through its interface in the event a user feels Vee has misinterpreted their input. Users can upvote or downvote specific responses, and this feedback is tracked, investigated, and incorporated into future Vee updates.
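The transparency surface described above (the derived prompts, metrics, and filters, plus user feedback) could be modeled as a simple record. The structure and field names below are invented for illustration only.

```python
# Hypothetical shape of a transparency record: what the user can inspect and
# how an upvote/downvote might be attached for later investigation.

from dataclasses import dataclass

@dataclass
class TranslatedQuery:
    user_input: str       # the original natural-language question
    metrics: list         # metrics Vee derived from the input
    filters: list         # filters Vee derived from the input
    feedback: str = ""    # "up" or "down", tracked for future updates

    def downvote(self):
        self.feedback = "down"

q = TranslatedQuery(
    user_input="Resignations among senior engineers this year",
    metrics=["resignationCount"],
    filters=["Level = Senior", "Org = Engineering", "Year = 2024"],
)
q.downvote()  # user signals a misinterpretation; this is tracked and investigated
print(q.feedback)  # -> "down"
```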
Risk Management
- How does Vee mitigate common risks associated with using generative AI?
Common generative AI risks are addressed by ensuring Vee is tied to the purpose and function of Visier People, as follows:
- Vee's scope does not extend beyond Visier People.
- Vee follows the same security policies and procedures applicable to other Visier products. See our Customer Data Safeguards Policy for more information.
- Vee provides only factual answers available to be surfaced from Visier People. It does not guess or hallucinate.
- Vee is particularly well-equipped to answer people analytics questions because it has been trained on Visier's Blueprint, which has been specifically curated to address employee populations in the same manner as Visier People (i.e. Vee's training data is not generic).
- What steps have been taken to ensure Vee’s accuracy?
Vee allows users to upvote or downvote specific responses. This feedback is tracked, investigated, and incorporated into future Vee updates, ensuring continuous improvement and maintaining a high overall degree of accuracy and effectiveness.
- How does Vee handle uncertainty?
Vee contains a clarification workflow: as soon as Vee is no longer certain of what is being asked, it will prompt the user for additional clarification or input.
- How does Visier balance experimentation and innovation with risk?
Experimentation and innovation are balanced with risk through careful planning, assessment, and mitigation strategies:
- Prototyping and testing new ideas on a smaller scale prior to full implementation allows for early identification of risks and challenges, so that the approach can be refined and improved before widespread deployment.
- An incremental approach to innovation is used, allowing for continuous feedback, adjustment, and refinement.
- Ongoing monitoring and evaluation are crucial to track progress, identify emerging risks, and assess the effectiveness of risk mitigation measures.
- Engaging stakeholders, including users, customers, and internal teams, is essential for understanding concerns, gathering feedback, and building support for innovative initiatives.
Customer Choice
- What choices can customers make about Vee?
Customers can choose whether to enable Vee. Customers can also elect to provide the Vee experience only to specific users or groups of users within their organization. Finally, customers can choose whether to enable the Vee summary feature.
- Which users have access to Vee?
Access to Vee is based on customer-controlled access permissions and profiles associated with an individual user. These profiles are established as part of your onboarding to Visier People, with Vee adhering to the same permissions for all Visier People access as selected by customer administrators during the onboarding process (or as subsequently modified by the customer).
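The permission model described above (Vee inherits the same customer-administered profiles as Visier People) could be sketched as follows. Profile names and the permission structure are hypothetical.

```python
# Illustrative sketch: Vee adheres to the same customer-controlled permission
# profiles as Visier People. Profile names and flags here are invented.

PROFILES = {
    "hr_analyst": {"visier_people": True,  "vee": True},
    "manager":    {"visier_people": True,  "vee": False},  # Vee not enabled
    "employee":   {"visier_people": False, "vee": False},
}

def can_use_vee(profile: str) -> bool:
    """Vee access requires both Visier People access and the Vee entitlement."""
    perms = PROFILES.get(profile, {})
    return perms.get("visier_people", False) and perms.get("vee", False)

print(can_use_vee("hr_analyst"))  # -> True
print(can_use_vee("manager"))     # -> False
```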
Responsible AI Development
- How does Visier ensure Vee is ethically developed?
Visier builds its AI programs, including Vee, in accordance with the following guiding principles:
- We respect the evolving guidance of legislative authorities globally, including without limitation the Blueprint for an AI Bill of Rights (US), Responsible use of artificial intelligence (AI) (Canada), and the European Commission's proposed EU Regulatory framework for AI (EU).
- We believe in responsible, measured development over innovation at all costs.
- We hold ourselves to high levels of transparency, accountability, and explainability.
- We value continued human oversight with appropriate checks and balances on AI autonomy.
- We prioritize data security and limit the sharing and persisting of data.
- We recognize, understand, and address inherent flaws in AI.
- We are committed to continuing to learn, to evolve, and to reevaluate with each new development.
- Does Visier have internal AI expertise or dedicated resources for responsible AI development and implementation?
Visier has created an internal AI Taskforce to oversee AI governance strategy and to ensure cross-collaboration on AI development initiatives and the use of AI technologies within Visier.
- How does Visier ensure responsible AI governance?
Visier is dedicated to maintaining a safe and responsible approach to our use of AI technologies as we leverage them towards better business outcomes. Our AI governance model is an extension of our core value of ethical and responsible use of data. We adhere to principles intended to (i) foster measured development in consideration of the latest standards; (ii) actively monitor and adapt to compliance requirements based on emerging regulations; and (iii) prioritize transparency and accountability in AI development by working to understand and address risks as they materialize.
Visier People Machine Learning
Visier leverages machine learning and predictive modeling as part of its Visier People analytics solution to surface results from a customer’s data.
- How does Visier People use machine learning?
Machine learning provides a predictive capability, using elements of a customer's specific people and business data to drive individualized predictive models. These models help users identify trends, such as which employees are likely to resign, be promoted, or change jobs.
- How does Visier People make predictions?
Visier People uses a random forest machine learning technique to predict how likely a future event is to occur. The predictive model builds a series of decisions and computes an employee's expected path along them, averaging historical event likelihoods to generate a result.
- How do users interact with Visier People's predictive capabilities?
Visier uses random forest models to create predictions and surface the insights to end users. Predictions can be viewed in the aggregate (for example, to see how many employees from a certain role are predicted to resign in the future), as well as at the employee level (for example, to see the specific employees predicted to resign and the top contributing factors driving the prediction). Visier also includes advanced clustering models to identify similar employees across different populations, based on certain attributes or events.
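A random forest predicting event likelihoods, viewable in the aggregate and with per-feature contributing factors, can be sketched with an off-the-shelf library. This is a toy stand-in on synthetic data, not Visier's actual model, features, or pipeline.

```python
# Toy sketch of a random-forest resignation prediction using scikit-learn.
# Feature names and data are invented; this is not Visier's implementation.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Synthetic employee features: [tenure, time_since_promotion, engagement]
X = rng.uniform(0, 1, size=(200, 3))
# Synthetic label: resigned within the next year (an invented rule)
y = ((X[:, 1] > 0.6) & (X[:, 2] < 0.4)).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Aggregate view: expected resignations across a population
population = rng.uniform(0, 1, size=(50, 3))
likelihoods = model.predict_proba(population)[:, 1]
print(f"Expected resignations: {likelihoods.sum():.1f} of 50")

# Employee-level view: which factors contribute most to the prediction
for name, imp in zip(["tenure", "since_promotion", "engagement"],
                     model.feature_importances_):
    print(f"{name}: {imp:.2f}")
```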
Visier People offers three different avenues to help users interpret the results and understand the proper course of action:
- Metric guides - every metric has a guide, accessible from within the application, that includes not only the definition of the metric, its usage, and its calculations, but also additional suggestions for what the user should focus on. For example, when reviewing resignations, the user should look at the metric and consider diversity impacts, or whether high performers are affected more than poor performers. These suggestions help the user draw the appropriate conclusions and determine where to explore next.
- Drivers chart - Visier's unique drivers chart helps users understand what factors most influence a metric. Looking at absenteeism, for example, the chart will identify the top attributes of employees consuming the most time off and which factors are most influential. In this way the user can understand what elements to pursue further to take the appropriate corrective actions. Is absenteeism driven by the new hire onboarding program? Are men and women using time off equally? Are there age components that should be reviewed?
- Risk of exit top 5 attributes - Visier's risk of exit predictive algorithm surfaces the attributes causing attrition within your organization and can predict, with a detailed score, who is likely to resign next. The visual surfaces the most compelling attributes for each person at risk so the user can better understand what factors are causing the risk and put in place programs to address it.
- Visier People provides a validation metric for each predictive model that lets you measure, inside the application, how close the number of actual exits, promotions, and internal moves was to the predicted values. You can verify directly, using your organization's data alone, whether a higher prediction likelihood resulted in a higher rate of actual events. Predictive success is calculated by taking the predictions for employees at one instant in time and then measuring the actual event rate of those employees over the following one-year validation period. The predictive success measure is defined as the actual event rate of the employees with the highest predicted likelihood divided by the overall event rate in the organization.
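The predictive success measure just defined can be computed directly: the actual event rate among the highest-likelihood employees, divided by the overall event rate. The numbers and the choice of "top fraction" below are invented for illustration.

```python
# Predictive success = actual event rate among employees with the highest
# predicted likelihood / overall event rate. A ratio above 1 means the
# predictions beat the base rate. All numbers here are invented.

def predictive_success(predictions, actual_events, top_fraction=0.1):
    """predictions: likelihood per employee at one instant in time.
    actual_events: 1 if the employee had the event in the following year, else 0."""
    n = len(predictions)
    overall_rate = sum(actual_events) / n
    # Rank employees by predicted likelihood and take the top slice
    ranked = sorted(range(n), key=lambda i: predictions[i], reverse=True)
    top = ranked[: max(1, int(n * top_fraction))]
    top_rate = sum(actual_events[i] for i in top) / len(top)
    return top_rate / overall_rate

preds  = [0.9, 0.8, 0.7, 0.3, 0.2, 0.2, 0.1, 0.1, 0.05, 0.05]
actual = [1,   1,   0,   0,   1,   0,   0,   0,   0,    0]
print(predictive_success(preds, actual, top_fraction=0.2))
# The top 20% (2 employees) both resigned, vs. a 30% base rate: ratio ~3.3
```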
Customers are able to configure these models to use the data attributes they choose (e.g. tenure, direct manager, age). This allows customers to tailor the models to their needs, improve accuracy, and reduce bias as needed. Changes to the data attributes used by the model will regenerate the random forest model and generate new predictions.
- What kind of control do customers have over Visier People predictions?
Customers can control the attributes and metrics that are included as inputs in Visier People. Customers can also choose to turn predictive capabilities on or off in their own individual deployment of Visier.
- How much data is needed to generate a predictive model?
Predictive models are most accurate when 36 months of historical data is available.
- How frequently does Visier People undergo model updates?
Visier People undergoes model updates at least annually. Customer administrators also have the ability to control the data attributes included in predictive models, generally on a weekly refresh schedule.
- How is customer data used in machine learning models retained?
Visier retains customer data throughout the contracted relationship with each customer. Customer data is deleted no more than 30 days after the agreement ends, in accordance with Visier's Customer Data Safeguards Policy. During the term, customers select their own employee records for inclusion or exclusion. Once an employee record is erased, the machine learning models no longer leverage that record as of the next scheduled refresh (which could take up to a week, depending on how close the erasure occurs to the weekly model refresh every Sunday).
- How does Visier test and verify there is no bias in our models?
Visier People is not intended to filter customer data to detect bias or discrimination. Since customers control the data attributes that are included in their models, it is up to each customer to confirm that the source data is free from bias.