I remember when AI was "computer science that doesn't work yet". Now, as artificial intelligence reshapes our world, the widespread use and potential integration of AI language models into commonly used office software signals a new era – one that feels comparable to only a few other dramatic shifts in technology, such as the dawn of the world wide web in the early 1990s, yet with a quality of its own.
As these models increasingly influence our daily processes, it is crucial to urgently address their implications for innovation and intellectual property, and their effects on the complex systems in which we operate: markets, organisations, and business and political relationships. The question is whether this development (or revolution, depending on your point of view) will actually unleash new creative potential or inadvertently become an obstacle to the human ingenuity that has driven our progress until now.
A major yet overlooked concern is the convergence of ideas and the risk of AI-induced 'groupthink' [1]. Widespread adoption of language models could lead to a homogenisation of ideas and strategies, reducing creative problem-solving and diversity of thought. The phenomenon can be compared to soldiers marching in lockstep across a bridge and causing its collapse through the amplified effect of their synchronised movements. Similarly, widespread use of the (at least currently) very small number of AI language models can create situations where everyone moves in lockstep, leading to a decline in creativity and diverse thought. AI has the opportunity to be a catalyst for (human) creativity, but the risk is that it may do exactly the opposite.
A related issue is the inherent bias present in AI language models [2]. While much criticism has rightfully been directed at the specific biases of such models, their effects on underrepresented groups, or their open display of racism, more insidious problems may arise from the network effect of their widespread adoption. Even if individual biases were mitigated, all models will inevitably retain some bias. The cumulative effect of these subtle biases could have severe consequences, as large-scale use of only a very few such models may amplify and reinforce them. It is essential to acknowledge and address this issue to ensure that AI serves as a tool for fostering diverse and inclusive innovation.
New and significant risks also emerge in the domain of intellectual property when companies worldwide run their ideas and strategies through a central language model that may learn from office documents, presentations, and emails. Quite likely, this will lead to confidential information and innovative concepts unintentionally flowing into the AI model and expanding its knowledge base. As a result, company secrets and intellectual property may inadvertently become available to other organisations or individuals using the same model – a threat to competitive advantage and to the security of trade secrets.
Critical thinking and diversity are crucial to innovation. Models that represent an average of "everything" available digitally, with ethical standards set by a handful of organisations, have little chance of contributing to either if applied at large scale.
This final thought leads to the effects of widespread application of such models on complex systems, with unpredictable consequences and feedback loops. These dynamics can amplify existing biases, create new vulnerabilities, and disrupt the delicate balance of networked systems. The financial industry offers an example of the potential global dynamics: imagine multiple financial institutions using the same AI language model to develop trading strategies and risk assessments. A sudden market event not detected by the AI could lead all institutions to take similar actions simultaneously, resulting in a massive wave of selling and a potentially catastrophic collapse of financial markets. Many more examples are conceivable.
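To make the monoculture mechanism concrete, here is a deliberately crude toy simulation, not a market model: traders each follow one of k models, every model reacts to a surprise shock with a uniformly random sell propensity (an assumption made purely for illustration), and we count how often more than 80% of traders sell at once.

```python
import random

def crash_probability(n_traders: int, n_models: int,
                      trials: int = 10_000,
                      crash_threshold: float = 0.8,
                      seed: int = 42) -> float:
    """Estimate how often a surprise shock triggers a synchronised sell-off.

    In each trial, every model responds to the shock with a random sell
    propensity in [0, 1]; each trader follows one model and sells with that
    model's propensity. A 'crash' is counted when more than
    `crash_threshold` of all traders sell simultaneously.
    """
    rng = random.Random(seed)
    crashes = 0
    for _ in range(trials):
        # Each model's (mis)reaction to the unforeseen event.
        reactions = [rng.random() for _ in range(n_models)]
        # Traders are assigned to models uniformly at random.
        sells = sum(
            rng.random() < reactions[rng.randrange(n_models)]
            for _ in range(n_traders)
        )
        if sells / n_traders > crash_threshold:
            crashes += 1
    return crashes / trials

if __name__ == "__main__":
    for k in (1, 2, 10, 100):
        p = crash_probability(n_traders=500, n_models=k)
        print(f"{k:3d} model(s): P(>80% of traders sell at once) = {p:.3f}")
```

With a single shared model, one bad reaction sweeps the entire market, so extreme sell-offs occur in a sizeable fraction of trials; spreading traders across many independent models averages the reactions out and makes the same extreme event vanishingly rare. The numbers mean nothing as a forecast; the point is only the qualitative effect of relying on very few models.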
I am actually (cautiously) optimistic about the opportunities in front of us, but they will require careful navigation. We will need to overcome the challenges of 'AI groupthink' and of information security, which in turn will require a deeper understanding of the role of AI in complex systems.
I can think of several potential remedies. First, promoting the diversification of AI models can help reduce the homogenisation of ideas and foster creative problem-solving. Given the cost of training and operating such models, this is also something that can (and should) be supported by national AI strategies.
Second, encouraging collaboration between AI developers, policymakers, and businesses, including education on how to avoid inadvertently sharing information, can help create a shared understanding of the risks and opportunities. This should also lead to the development of best practices and (updated) regulatory frameworks and strategies (e.g., [3,4]).
Last but not least, developing and sharing open-source AI models and tools can further promote diversity and innovation in AI development and usage, while allowing greater scrutiny and improvement by the wider community. These potential solutions will require work to mitigate the risks that also come with the broad availability of language models, and a close look at the trade-offs involved.
Overall, it is critical to understand the risks that widespread use of AI language models poses to innovation, intellectual property, and the systems we operate and live in. These challenges need to be addressed, and they call for a continuing discussion about the wider impact of such technology. This discussion will be important for striking the right balance in our use of AI in daily decision-making, and for maintaining our capacity to be innovative and creative.
[1] Jay Dixit. Algorithmic Bias Is Groupthink Gone Digital, 2019.
https://neuroleadership.com/your-brain-at-work/algorithmic-bias-groupthink-gone-digital/
[2] James Arvanitakis, Andrew Francis, Oliver Obst. Data ethics is more than just what we do with data, it’s also about who’s doing it, 2018.
https://theconversation.com/data-ethics-is-more-than-just-what-we-do-with-data-its-also-about-whos-doing-it-98010
[3] Australia’s Artificial Intelligence Ethics Framework.
https://www.industry.gov.au/publications/australias-artificial-intelligence-ethics-framework
[4] Artificial Intelligence Strategy of the German Federal Government, 2020 Update.
https://www.ki-strategie-deutschland.de/files/downloads/Fortschreibung_KI-Strategie_engl.pdf