Summary:
Businesses that want to capture real value from artificial intelligence need more than enthusiasm for the technology: they need a clear strategy. That strategy should define the specific goals AI is meant to achieve and address the internal ecosystem required to deliver them, including talent and skills development, risk management, processes and operations, security and scalability, and legal considerations such as licensing, privacy, and intellectual property. Companies should work closely with their legal teams to identify and address these risks, and secure contracts not only with technology providers but also with internal talent involved in AI projects. Organisations must also track a rapidly evolving regulatory landscape: in Nigeria there is as yet no AI-specific legislation, but existing frameworks such as the Nigeria Data Protection Act apply, and the draft National Artificial Intelligence Strategy signals the direction of future regulation.
In 1995, Clifford Stoll famously remarked in Newsweek, “The truth is no online database will replace your daily newspaper, no CD-ROM can take the place of a competent teacher, and no computer network will change the way government works… Even if there were a trustworthy way to send money over the Internet—which there isn’t—the network is missing a most essential ingredient of capitalism: salespeople.”[1] Stoll’s scepticism about the internet’s future impact has since been thoroughly disproven, with the internet becoming a cornerstone of modern society. Yet his sentiments from 1995 mirror the scepticism many hold today regarding artificial intelligence (AI).
While the average person may be able to afford indifference to AI, business owners and leaders cannot. In the competitive landscape of modern business, a lack of interest in AI or a failure to explore its potential could lead to a significant loss of relevance and revenue. In fact, the ability to effectively leverage AI may become the decisive factor between businesses that thrive and those that falter.
For businesses seeking to stay ahead of the curve as technology continues to evolve, it is crucial to develop an understanding of how AI can be integrated into their operations to deliver substantial value. The first step is gaining a clear understanding of what AI actually is. In simple terms, AI is a technology that enables computers and machines to simulate human intelligence and problem-solving capabilities.[2] Although the discussions around AI might suggest it is a completely new phenomenon, most businesses have already engaged with it in some form, often without realising it. For instance, customer chatbots on retail websites and voice assistants like Apple’s Siri and Amazon’s Alexa are everyday examples of AI in action, and they have been part of our lives for years.
Once business leaders have a firm grasp of AI, the next step is to assess its value for their specific business context. When the potential benefits of adoption are clear, they should develop a comprehensive strategy for AI implementation. When it comes to deploying AI within an organisation, two primary schools of thought have emerged. The first school of thought views AI as a collection of tools and applications that can be deployed in various parts of a business. This approach encourages companies to carefully review their structure and operations to identify specific areas where AI could be integrated. For example, organisations that want to increase productivity may use AI to automate routine tasks or to analyse large datasets for insights.
The second school of thought considers AI as a foundational technology that has the potential to underpin the entire operation of an organisation. This perspective urges business leaders to rethink the very nature of their organisation’s operations, focusing on how AI can be embedded into the core processes that drive the business. Rather than simply adding AI to existing processes, this approach advocates for a fundamental reimagining of how the organisation functions, leveraging AI to create new opportunities for innovation and growth.
While neither approach is inherently right or wrong, there is a compelling argument to be made that the companies currently leading in AI adoption and realising significant value are those that have embraced the second, more transformative approach.
Adopting Artificial Intelligence: Key Considerations
Regardless of the approach an organisation chooses, business leaders must ask themselves a critical question when developing an AI implementation strategy: What are the specific goals and objectives they hope to achieve with AI? The answer to this question will guide the selection of AI technologies and models that are most suitable for the organisation’s needs.
However, there is a growing tendency among businesses to adopt prevailing AI technologies, such as generative AI,[3] without conducting the necessary research to determine whether these technologies are the best fit for their specific context. This approach can result in organisations not fully reaping the benefits of AI and may lead to difficulties in justifying the allocation of time and resources towards AI initiatives. To avoid this pitfall, business leaders should take the time to identify the AI technologies that align most closely with their industry, the nature of their business, and their strategic goals.
When adopting AI, there are several critical factors that leaders must consider to ensure that they are deploying and utilising AI in a manner that will drive value within their organisation.
Crucially, business leaders will have to assess whether they have the right internal ecosystem[4] for the implementation of AI.
- Talent and Skills Development
One of the major concerns surrounding AI adoption is the fear that it will lead to job displacement as machines take over tasks previously performed by humans. However, the reality is that AI also creates new opportunities for employment, particularly in areas such as AI research, development, implementation, and maintenance. Companies need to identify the specific talent required to execute their AI strategy and develop a recruitment plan that prioritises individuals with AI expertise.
In addition to recruitment, robust training programs are essential for employees who will be using AI technologies. As AI continues to evolve, ongoing skill development will be necessary to ensure that employees can effectively utilise new tools and technologies as they emerge. A company that fails to invest in its talent will likely struggle to derive the full value from the AI technologies it implements.
- Risk Management
As with any technology, the use of AI comes with its own set of risks. Business leaders must ensure that appropriate risk mitigation practices are in place to limit the potential downsides of AI adoption. The specific risks associated with AI will vary depending on the technology being used and the industry in which the organisation operates. For example, the rise of generative AI has brought increased attention to risks such as bias in AI-generated outputs.
Before AI technology is fully integrated into an organisation, a thorough risk assessment should be conducted to identify potential challenges and develop strategies to address them. Regular risk audits should be performed to ensure that any emerging risks are promptly identified and mitigated. Additionally, robust testing of AI technologies should be carried out before full-scale deployment, not only by those implementing the technology but also by the employees who will be using it. A failure to conduct a comprehensive risk assessment and implement risk mitigation practices can result in serious reputational and legal consequences for companies utilising AI technology.
- Processes and Operations
The deployment of AI will inevitably impact the process flow within an organisation. An analytical review of the organisation’s processes should be undertaken to determine whether they need to be redesigned to fully leverage the capabilities of AI. Clear processes must be established to rapidly resolve any issues that arise, and it is important to identify which elements of the AI technology will require human oversight.
The goal of any AI implementation should be to achieve structured scalability, which can only be realised by eliminating silos within the organisation and prioritising collaboration across teams. Leveraging the skills and perspectives of employees from different areas of the business will be critical to ensuring that AI solutions are effectively integrated and deliver the desired outcomes. Additionally, securing funding and establishing a budget process for the implementation and scaling of AI solutions should be a priority for business leaders.
- Security, Technology, and Scalability
Cybersecurity is a key consideration when deploying AI technologies, particularly as these technologies often involve the processing of sensitive data. Organisations should establish strong cybersecurity processes, policies, and practices to prevent unauthorised access and misuse of AI technologies. In-depth training for employees on the use of AI technologies is essential, as is the implementation of privacy and security mechanisms to prevent data breaches, especially when using publicly available AI solutions.
Business leaders who realise the value of AI adoption will seek to achieve scale and full integration within the organisation. To do this, detailed roadmaps for scaling AI across the organisation should be developed, which will involve assessing various use cases for AI and reviewing and potentially redesigning business processes where necessary. Companies should also evaluate whether they have the necessary technology infrastructure and technical expertise to support the scaling of AI technologies.
- Legal Considerations
Legal considerations are closely intertwined with risk management when it comes to AI adoption. Companies should work closely with their legal teams to identify and address the legal risks associated with AI technologies. These risks will vary depending on whether the company is developing its own AI technologies or utilising existing ones. For example, the legal implications of piloting new AI technologies may differ significantly from those associated with using publicly available AI solutions or customising them for specific business needs.
Key legal considerations include licensing, privacy, and intellectual property issues, all of which must be thoroughly assessed to ensure compliance with relevant laws and regulations. Contracts should also be secured, not only with technology providers but also with the internal talent involved in AI projects. Beyond these specific considerations, organisations need to stay informed about the broader legal and regulatory landscape, which is rapidly evolving as governments and regulators increasingly focus on governing AI use. For instance, the Council of the European Union recently approved the EU Artificial Intelligence Act,[5] reflecting a growing emphasis on AI regulation. Countries are also forming strategic partnerships, such as the Global Partnership on Artificial Intelligence,[6] which advances an agenda for implementing human-centric, safe, secure, and trustworthy AI. The partnership, comprising 44 members including Brazil and India, highlights the global nature of AI regulation and the need for organisations to be aware of both national and international developments in this area.
The Nigerian Regulatory Landscape
As AI technology rapidly advances, governments and regulatory authorities around the world are working to keep pace by developing policies and regulations to guide the use and development of AI. In Nigeria, there are currently no specific laws targeting AI, its use, or its development. However, existing legal frameworks do cover certain aspects of AI. For example, the Nigeria Data Protection Act applies to AI technologies that process personal data, regulating how data controllers and processors manage this information. Similarly, the Nigeria Cybercrimes Act provides a legal framework for the prevention, prosecution, and punishment of cybercrimes, which can be relevant to AI technologies in the context of cybersecurity, intellectual property, and privacy rights.
Although Nigeria has not yet issued a comprehensive AI-specific regulatory framework, the country is taking steps towards developing a national AI strategy.[7] In August 2024, the Federal Ministry of Communications, Innovation and Digital Economy (FMCIDE) released a draft National Artificial Intelligence Strategy. The draft is intended to serve as a roadmap and guiding framework as Nigeria charts its course towards optimising the benefits of AI, and it was shared with the public for feedback from stakeholders and experts. This process presents an opportunity for business leaders who understand AI’s value and the challenges it presents to provide input and influence the direction of AI regulation in Nigeria.
Regulatory Trends in Nigeria
As AI adoption in Nigeria continues to grow, the regulatory landscape is likely to evolve in response to new challenges and opportunities. Currently, AI regulation in Nigeria is still in its infancy, but future frameworks are expected to align with global trends, focusing on critical areas such as data protection, accountability, and security.
For now, Nigeria is adopting a hybrid approach to AI regulation, combining industry self-regulation with existing legal frameworks. However, as AI technologies become more widespread, the government is expected to take a more active role in regulating the development and use of AI. The goal will be to strike a balance between fostering innovation and ensuring that AI technologies are developed and utilised responsibly.
Conclusion
AI adoption by businesses has accelerated dramatically, with generative AI emerging as a key trend. Companies that fail to explore AI’s potential risk being left behind, and the consequences could be severe. When it comes to AI adoption, organisations fall into three distinct categories: innovators, adopters, and laggards. Innovators are at the forefront of AI development, creating new technologies that are transforming industries and writing the playbook for the development and implementation of AI. Adopters are leveraging proven, publicly available AI technologies to enhance their operations and stay competitive, with some customising these technologies with their own data. Laggards are slow to adopt AI and risk falling behind as their competitors pull ahead.
Successful business leaders are those who recognise the potential of AI and strategically integrate it into their operations. However, it is important to avoid the temptation to adopt AI simply to keep up with industry trends. Doing so can lead to the implementation of AI solutions that do not align with the organisation’s goals, resulting in wasted resources and missed opportunities. Instead, organisations should focus on adopting and deploying AI models that genuinely create value and contribute to achieving their strategic objectives.
As AI technology continues to evolve, businesses must remain agile and adaptable, continually assessing and refining their AI strategies to stay ahead of the competition. By doing so, they can ensure that they are not only keeping pace with technological advancements but also positioning themselves to capitalise on the opportunities that AI presents.
If you want to learn more about AI adoption and the factors to consider, email insights@xentialp.com.
[1] Clifford Stoll, The Internet? Bah!, Newsweek (Feb. 27, 1995).
[2] What is Artificial Intelligence (AI)?, IBM, https://www.ibm.com/topics/artificial-intelligence.
[3] Generative AI models are trained on vast, diverse data sets. They take unstructured data, such as text, as inputs and produce unique outputs. Technology Trends Outlook 2024, McKinsey & Company (July 2024).
[4] This is distinct from the external ecosystem that is required for scaling technology adoption. The external ecosystem encompasses user trust and readiness, business model economics, the regulatory environment, and talent availability. These factors vary by geography and industry, and businesses should be aware of the external ecosystem they operate within and how it can affect adoption and successful scaling. Technology Trends Outlook 2024, McKinsey & Company (July 2024).
[5] The Act was approved on May 21, 2024. The approval by the Council of the European Union is the final stage in the European Union legislative process. The Act assigns AI applications to three risk categories: unacceptable risk, high risk and low risk.
[6] Global Partnership on Artificial Intelligence, OECD, https://www.oecd.org/en/about/programmes/global-partnership-on-artificial-intelligence.html.
[7] A number of countries have developed national artificial intelligence strategies including China, South Korea and Brazil.