Thought Leadership Articles
Published 25 July 2024
The impacts of AI, both positive and negative, are a hot topic around advisory board tables. With recent announcements of AI bots being “appointed” as board advisors, questions have been raised about the benefits, the risks and, most importantly, the ethics of informed decision making. Technology as a business disruptor is not new, but are we now facing a future where AI has a virtual seat at the decision-making table?
The Real Estate Institute of NSW (REINSW) recently announced a self-proclaimed Australian-first AI bot as an advisor to its board. The bot, Alice Ing, will “provide(s) an instantaneous and valuable contribution to the Board that is without (human) peer.” In a quote attributed to the bot, Alice asserts, “I will transform how REINSW conducts its business, bringing a new level of efficiency and insight to board meetings and decision-making processes.”
The UAE’s largest publicly traded entity, International Holding Company (IHC), appointed Aiden Insight as an AI board observer to “leverage advanced data analytics to provide actionable insights and risk assessments, enhancing the Board’s oversight and foresight.” While Aiden did not provide a comment, the company’s media release stated “The role of ‘Aiden Insight’ will encompass a wide range of responsibilities, including continuous data analysis, risk assessment, strategic planning support, innovation tracking, and ethical and compliance monitoring. ‘Aiden’ will attend IHC Board meetings as a non-voting observer, offering real-time insights to inform discussions and guide decisions.”
While most organisations aren’t prepared to go as far as officially appointing an AI bot to their board, the reality is that more decision makers, and those who supply them with critical information, are under pressure and seeking new ways to gain an edge. Faced with a deluge of data and the velocity of change, it’s tempting to leverage the perceived benefits of AI. From conducting “research” to feeding key reports and board papers in for a swift summary, the allure is real.
The increased pressure on organisational leaders and decision makers is a growing concern. Our 2023 State of the Market Report highlighted the “Governance Dilemma”, where governance board directors are feeling the pressure of an expanding board agenda against current board constraints. The velocity, complexity and uncertainty of information often outpaces the bandwidth, time availability and skillsets of current directors. Even leaders who are diligent in maintaining currency of knowledge are feeling the full weight of the VUCA (volatile, uncertain, complex and ambiguous) environment that is now the norm.
Rather than demonising or ridiculing a particular approach, effective leaders leverage critical thinking to understand the context, robustly explore their options, including potential intended and unintended consequences, and make well-informed decisions.
Company directors are ultimately responsible for the decisions made for and within the organisations they serve. The Australian regulator, ASIC, says:
When you make a business decision as a company director, you must, among other things, ensure that you:
At the Advisory Board Centre, we clearly advocate for the important role of best practice advisory boards within governance systems to effectively augment informed decision making.
Well-structured advisory boards offer many advantages to organisations that value informed decision making.
While an AI algorithm may process vast amounts of data (content) at speed, it will always be limited in its ability to interpret the context of both the data and the decision-making environment. Advisors can aid in the exploration of qualitative factors or cultural nuances that can influence decision-making.
AI bots are marketed as supporting highly accurate, data-driven decisions. Unfortunately, the data simply doesn’t support this except in very narrow use cases. Estimates of accuracy in generative AI vary, and these models are prone to providing incorrect, irrelevant or nonsensical information, known as hallucinations. A recent Google research analysis found that the best models achieved only 52.9% accuracy overall. LLMs are not currently effective at identifying mistakes or logical errors. While both humans and bots require quality inputs, humans engaged in robust debate are more likely to detect errors in reasoning and track down mistakes in the data.
One of the purported advantages of “data-driven decision making” is freedom from bias. The reality is that all data, and all decision making (even AI assisted), is prone to bias. Bias is not always a negative. An organisation that has a well-articulated understanding of its risk profile and access to resources may have a bias towards innovation. ChatGPT, like other AI models, can exhibit several types of bias in its decision-making or content generation. These biases can stem from various sources, including the data it was trained on, the design and implementation of the model, and the way it interacts with users. Quality advisors are able to articulate their own perceived biases and help organisations constructively explore internal biases that may influence decision making. In fact, a personal self-reflection and declaration of intended and unintended biases is one way that quality advisors can demonstrate best practice and ethical engagement.
One of the inherent strengths of best practice advisory boards is the ability to tailor their formation and implementation to the needs of the organisation and its objectives. From project-based advisory boards assembled to explore a particular objective, to the ability to add or remove members based on skillsets, contribution and impact, advisory boards do not require the same level of technical programming and maintenance as AI-enabled tools.
An AI bot is not a person. Decision makers who rely on AI bots to inform their decision making may find themselves in a precarious position regarding liability and regulatory compliance. The complexity of the underlying technology also makes it harder for decision makers to communicate how decisions were made and to demonstrate accountability to their stakeholders. And if you contract out the development of the bot, who actually “owns” decision making in your organisation?
Organisational leaders often need to make decisions based on ethical and moral considerations. AI, while effective at data analysis, lacks the ability to make ethical judgments or consider the broader societal impact of decisions in the same way humans can. Articulating the situational ethics that underpin decision making is a skill that is outside of AI’s ability.
In what feels like an almost Orwellian exercise, we used ChatGPT to explore this issue. We asked “Would OpenAI appoint an AI bot to their board of directors?” The response, included below, was telling.
As of now, it’s unlikely that OpenAI or any organization would appoint an AI bot to their board of directors. OpenAI would likely view AI as a tool to assist and enhance the effectiveness of its board of directors rather than as a replacement for human members. The organization might explore innovative ways to integrate AI into the board’s processes, but the final decision-making authority would remain with human directors. In summary, while AI has the potential to significantly enhance the capabilities of a board of directors through data analysis and decision support, the complexities of legal, ethical, and practical considerations make it unlikely for AI to be appointed as a board member in the foreseeable future.
AI-enabled tools will be increasingly used within organisations at all levels. Decision makers would be prudent to carefully explore the intended and unintended consequences and to ensure there are strong guardrails in place. The market is booming, with technology providers and consultants currently seeing the greatest gains, which requires organisational leaders to exercise a greater level of diligence before deciding who, or what, to trust.
Creating a project-based advisory group, such as a reference group or steering committee, to explore AI-enabled options and organisational guardrails is an effective way to ensure the opportunity is not missed and the challenges are not magnified.