By Dr. Frank Appiah  |  07/05/2024



As artificial intelligence (AI) rapidly transforms our world, a crucial question emerges: how do we ensure it's used for good? How can humans help AI tools to make fair and unbiased decisions?

These challenges necessitate a balanced approach to AI governance, strong ethical frameworks, and transparent laws and regulations. By prioritizing the development of AI systems that are fair, accountable, and beneficial to society, we can leverage the power of AI technology to address some of humanity's most pressing issues while reducing its risks.

AI governance aims to oversee how AI is used, with an emphasis on ensuring it is used for good. We should also keep in mind that AI is a machine: it has no feelings, cannot reason, and cannot distinguish good from evil. But AI governance and responsible artificial intelligence applications can shape a future where this powerful technology benefits everyone.

 

Understanding AI Governance

Artificial intelligence governance refers to the policies, regulations, and ethical guidelines that govern the development, deployment, and use of AI technologies. It encompasses a range of issues, including data privacy, algorithmic transparency, accountability, and fairness.

Through collaboration, AI practitioners, educators, and governments can propose solutions that help ensure the equitable and safe use of AI. One tool would involve partnering with private-sector businesses to track graphics processing unit (GPU) usage so that large-scale AI compute can be properly accounted for through logging.
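
To make this idea concrete, here is a minimal sketch in Python of what such GPU accounting could look like. It assumes NVIDIA's nvidia-smi command-line tool is available and simply appends periodic utilization samples to an audit log; the log file name, query fields, and sampling interval are illustrative choices, not part of any existing governance standard.

    # Minimal sketch: periodically log GPU utilization for audit purposes.
    # Assumes the NVIDIA "nvidia-smi" command-line tool is installed; the log
    # file name, query fields, and sampling interval are illustrative choices.
    import subprocess
    import time

    LOG_PATH = "gpu_usage_audit.log"  # hypothetical audit log location

    def sample_gpu_usage() -> str:
        """Query current GPU utilization and memory use as CSV text."""
        result = subprocess.run(
            [
                "nvidia-smi",
                "--query-gpu=timestamp,name,utilization.gpu,memory.used",
                "--format=csv,noheader",
            ],
            capture_output=True,
            text=True,
            check=True,
        )
        return result.stdout.strip()

    def log_usage(interval_seconds: int = 60, samples: int = 5) -> None:
        """Append periodic GPU usage samples to an append-only audit log."""
        with open(LOG_PATH, "a", encoding="utf-8") as log_file:
            for _ in range(samples):
                log_file.write(sample_gpu_usage() + "\n")
                log_file.flush()
                time.sleep(interval_seconds)

    if __name__ == "__main__":
        log_usage()

In practice, a governance framework would pair this kind of raw logging with secure storage, tamper-evident records, and agreed reporting thresholds.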

Another tool could be an AI registry similar to a database for drivers’ licenses. This registry could log the use of all AI tools. ChatGPT®, Bidirectional Encoder Representations from Transformers (BERT®), and DALL-E® are just a few examples of the tools that could be monitored with proper governance.

If an AI tool’s algorithm is used by consumers or other non-creators, it must be logged, and its performance and impact must be evaluated. Once logged, the algorithm would receive a unique code identifying its characteristics, purpose, developers, and ownership.

There should also be a quick kill process that the governing board can initiate to stop an algorithm in its tracks if something goes wrong, before it spins out of control. This registry would also necessitate the formation of a governing body that represents practitioners, governments, students, teachers, and users.

The core of the governing body should be broad enough to be representative and narrow enough to be effective.
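
As an illustration of how such a registry might work at the data level, the Python sketch below registers an algorithm under a unique code and gives a governing board a simple "kill" flag it can set. The record fields, class names, and in-memory storage are hypothetical simplifications; a real registry would be a secured, audited service.

    # Illustrative sketch only: a hypothetical registry record for an AI
    # algorithm, with a unique code and a "kill" flag a governing board could
    # set. Field names and in-memory storage are assumptions for illustration.
    import uuid
    from dataclasses import dataclass, field

    @dataclass
    class AlgorithmRecord:
        name: str
        purpose: str
        developers: list[str]
        owner: str
        # Unique code identifying the algorithm in the registry.
        registry_code: str = field(default_factory=lambda: uuid.uuid4().hex)
        # Set to False by the governing board to halt the algorithm.
        active: bool = True

    class AIRegistry:
        def __init__(self) -> None:
            self._records: dict[str, AlgorithmRecord] = {}

        def register(self, record: AlgorithmRecord) -> str:
            """Log a new algorithm and return its unique registry code."""
            self._records[record.registry_code] = record
            return record.registry_code

        def kill(self, registry_code: str) -> None:
            """Quick kill process: mark an algorithm as stopped."""
            self._records[registry_code].active = False

        def is_active(self, registry_code: str) -> bool:
            """Deployed systems would check this flag before serving requests."""
            return self._records[registry_code].active

    # Example: register a fictional tool, then halt it.
    registry = AIRegistry()
    code = registry.register(
        AlgorithmRecord(
            name="ExampleChatModel",          # fictional tool
            purpose="customer support assistant",
            developers=["Example AI Lab"],
            owner="Example Inc.",
        )
    )
    registry.kill(code)                       # governing board halts it
    assert registry.is_active(code) is False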

 

Why AI Governance Matters

AI can be used for good or bad, like any other technology that has ever been developed. The difference is that AI is a new frontier that touches almost everything in our daily lives, and it can have far-reaching consequences if used for harmful purposes. The rapid advancement of AI models and systems will bring immense opportunities and benefits as well as significant challenges.

Without responsible AI governance, this technological advancement could lead to unintended consequences, such as:

  • Reinforcing biases
  • Infringing on privacy
  • Causing economic disruptions
  • Turning against humanity

But trustworthy AI governance will steer us towards a future where AI's benefits are maximized and its risks minimized. Here are some thought-provoking insights that could inspire you to ask more questions and join the debate on AI oversight.

 

Risks

AI systems can inadvertently perpetuate biases present in the data used to train them. This bias can result in unfair treatment of certain groups, reinforcing societal inequalities.

Robust governance frameworks, however, can help mitigate these risks by ensuring that AI systems are designed and tested for fairness and equity. Transparency and data quality would also improve as a result.

 

Accountability

As AI systems make more decisions that impact human lives – such as the AI technology used for self-driving cars – ensuring compliance and accountability will become paramount. An AI governance framework could establish clear guidelines for who is responsible when AI systems fail or cause harm.

This accountability is crucial for maintaining public trust in AI technologies and upholding societal values. The purpose is not to hamper innovation, but to make it robust and useful.

 

Privacy 

AI systems often rely on vast amounts of data, and some of that data is personal data acquired from the open web. Effective data governance can put policies in place to ensure that the data used to train AI algorithms is collected, stored, and used in ways that respect individuals' privacy rights.
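
As a small illustration of what a data-handling policy might require in practice, the Python sketch below scrubs obvious personal identifiers from text before it is added to a training corpus. The regular expressions and placeholder tags are simplified assumptions; real compliance with laws like the GDPR involves far more than pattern matching, including consent, retention limits, and audits.

    # Minimal sketch, assuming a simple regex-based filter: scrub obvious
    # personal identifiers (emails, phone-like numbers) from text before it
    # enters a training corpus. The patterns and placeholder tags are
    # simplified assumptions; real compliance requires far more than regexes.
    import re

    EMAIL_PATTERN = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
    PHONE_PATTERN = re.compile(r"\+?\d[\d\s().-]{7,}\d")

    def redact_personal_data(text: str) -> str:
        """Replace email addresses and phone-like numbers with placeholders."""
        text = EMAIL_PATTERN.sub("[REDACTED_EMAIL]", text)
        text = PHONE_PATTERN.sub("[REDACTED_PHONE]", text)
        return text

    raw = "Contact Jane at jane.doe@example.com or +1 555-123-4567 for details."
    print(redact_personal_data(raw))
    # Contact Jane at [REDACTED_EMAIL] or [REDACTED_PHONE] for details.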

Legal regulations such as the General Data Protection Regulation (GDPR) in Europe set important precedents for data governance and the protection of data privacy. Through international collaboration, this type of legislation could be adopted by other nations, especially with the aid of an AI registry.

 

Transparency

Transparency in AI algorithms and decision-making processes will foster trust between AI developers and user communities. Governance frameworks, such as a registry for monitoring AI systems in production, would mandate the disclosure of how those systems work, enabling users to understand and challenge the decisions they make. This transparency is vital for ensuring that AI operates in the public interest.

 

Employment Impact 

Employers are gearing up to incorporate AI into their mainstream tasks. However, the integration of AI into the workplace presents a paradoxical scenario.

On one hand, AI has the potential to increase productivity and create new job opportunities. On the other, it threatens to displace a significant number of people, especially those who perform routine, repetitive tasks.

For instance, a study by McKinsey Global Institute suggests that up to 800 million jobs could be lost to automation by 2030. While this development could be tragic for workers in manufacturing, transportation, and customer service, it can also be an opportunity to repurpose the skills of these workers through cooperation and global AI governance. After all, the World Economic Forum predicts that AI will ultimately lead to more jobs.

While there is understandable anxiety about potential job loss, many people see AI adoption as a benefit. Global cooperation in the use of this new technology will be helpful in – at a minimum – retraining employees whose tasks have been or will be replaced by AI. For instance, requiring employers to offer alternative routes for training employees to increase their productivity and efficiency with AI could be a good place to start.

In the same vein, AI governance can motivate employees and employers to build the workforce skills tied to AI. As AI takes deeper root in our society, there is a growing demand for employees with expertise in AI, machine learning, and data science.

This shift requires substantial investment in education and training programs to equip workers with the necessary skills to thrive in an AI-driven economy. AI governance bodies can collaborate with different organizations to develop policies that support workforce retraining and continuous learning.

 

Sustainability and Environmental Impact 

The impact of AI on the environment is too often sidestepped in favor of the benefits of this technology. However, the energy, computing power, and resources that drive this technology can have significant environmental implications.

Take, for instance, the training of an AI algorithm, which requires a lot of computing power even for lighter-weight models. As an AI algorithm’s performance scales, so does the need for computing power. For instance, a large language model (LLM) with 500 million parameters may perform considerably better when the parameter count increases to 1 billion.

This increase in performance requires a great deal of computational power and, as a result, comes at a high cost to companies and the environment. To put it in perspective, one widely cited estimate found that training a single large AI model can emit as much carbon as five cars do over their lifetimes.
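
To give a rough sense of how training cost grows with model size, the Python sketch below uses a commonly cited rule of thumb of roughly six floating-point operations per parameter per training token. The token budget is a made-up placeholder and real training runs vary widely, so treat the numbers as an order-of-magnitude illustration only.

    # Back-of-the-envelope sketch of how training compute grows with model
    # size, using the common ~6 * parameters * training-tokens approximation
    # for total floating-point operations. The token budget is a made-up
    # placeholder, not a real training configuration.
    def estimate_training_flops(num_parameters: float, num_tokens: float) -> float:
        """Rough training cost estimate: ~6 FLOPs per parameter per token."""
        return 6.0 * num_parameters * num_tokens

    TOKENS = 100e9  # hypothetical budget of 100 billion training tokens

    for params in (500e6, 1e9):
        flops = estimate_training_flops(params, TOKENS)
        print(f"{params / 1e9:.1f}B parameters -> ~{flops:.2e} training FLOPs")

    # Doubling the parameter count roughly doubles training compute on the
    # same data, and larger models are usually trained on more tokens as
    # well, so real-world costs tend to grow even faster.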

This high consumption of energy underscores the need to develop more energy-efficient algorithms that use less computing power and to promote the use of renewable energy sources in AI operations. Some improvements have already been made in this arena: to repurpose an LLM for other tasks, it is no longer necessary to retrain the algorithm’s entire architecture.

Instead, it is possible to train only limited layers of the AI architecture to accomplish the new task. In addition, AI governance that fosters collaboration between industry and government can help speed up the energy-saving process.
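
As a rough illustration of training only limited layers, the sketch below freezes most of a small stand-in network in PyTorch and updates just its final layer. The toy model, layer sizes, and optimizer settings are placeholders rather than the architecture of any particular LLM, but the same freeze-and-fine-tune pattern is what saves compute when adapting large models.

    # Minimal sketch of the "train only limited layers" idea using PyTorch.
    # A tiny stand-in network replaces a real LLM here; the layer sizes and
    # optimizer settings are arbitrary choices for illustration.
    import torch
    from torch import nn

    # Pretend the first layers are an already-trained backbone and the last
    # layer is a task-specific head that we want to adapt to a new task.
    model = nn.Sequential(
        nn.Linear(128, 256),
        nn.ReLU(),
        nn.Linear(256, 256),
        nn.ReLU(),
        nn.Linear(256, 10),  # task head for the new task
    )

    # Freeze every parameter so the backbone is not retrained.
    for param in model.parameters():
        param.requires_grad = False

    # Unfreeze only the final layer; only these weights will be updated.
    for param in model[-1].parameters():
        param.requires_grad = True

    trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
    total = sum(p.numel() for p in model.parameters())
    print(f"Training {trainable:,} of {total:,} parameters")

    # The optimizer only sees the trainable parameters, so each update is far
    # cheaper than retraining the whole architecture.
    optimizer = torch.optim.Adam(
        (p for p in model.parameters() if p.requires_grad), lr=1e-3
    )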

 

Rare Mineral Usage and Renewable Energies Integration 

The hardware that propels AI infrastructure, such as GPUs and specialized processors, requires rare minerals such as lithium and cobalt.

The extraction and processing of these minerals have significant environmental and ethical implications, including exploitative labor practices in underdeveloped countries, habitat destruction, and water pollution. These problems can transform landscapes, paving the way for catastrophic events such as mudslides and flooding.

Implementing AI governance in which governments, industries, and international bodies monitor the mining of such minerals would help ensure the sustainable and ethical sourcing of rare minerals. Regulations can be created to enforce responsible mining practices across the globe and the use of certified supply chains.

For example, existing policies such as the European Union's Conflict Minerals Regulation and the Dodd-Frank Act Section 1502 address the sourcing of minerals from conflict-affected areas. This legislation could be enhanced to cover potential unhealthy practices that can have devastating impacts on the environment.

AI governance can also promote the use of renewable energy sources in data centers and AI operations to reduce their carbon footprint. Companies like Google® and Microsoft® have already committed to powering their data centers with 100% renewable energy, setting benchmarks for the industry.

AI governance policies, including subsidies and grants, can be implemented to support the integration of renewable energies such as solar power and wind power.

 

Innovation

Part of maintaining an AI registry is ensuring that a well-structured AI governance strategy promotes innovation through clear guidelines and standards for artificial intelligence development. These guidelines and standards will reduce uncertainty, create uniformity, and level the playing field in society. They will also increase trust between users and AI developers, encouraging organizations to invest in AI technologies while adhering to ethical standards.

 

The Leading Components of AI Governance

Effective AI governance frameworks are constantly changing and would typically include these components:

  • Ethical guidelines – Guidelines would outline the values and principles that should dictate AI development and usage, including fairness, transparency, accountability, privacy, and respect for human rights. These values should underpin the framework in which AI governance practices operate, and they should be broad and dynamic enough to be easily modified as artificial intelligence technology advances.
  • Regulatory policies – A legal framework for AI governance could be created to define the rules and standards for responsible AI development, deployment, and usage. It would ensure compliance with ethical guidelines while protecting public interests.
  • Oversight mechanisms – Independent regulatory bodies or ethics committees, such as corporate AI ethics boards, could monitor AI systems and enforce compliance with governance frameworks. These organizations could ensure that AI technologies are used responsibly and ethically. They would also need the legal support to regulate industries and be positioned to reward and punish organizations as needed to maintain order.
  • Public engagement – Diverse stakeholders, including marginalized communities, could be integrated into the governance process. As a result, AI technologies would reflect a wide range of perspectives and address societal needs.
  • Continuous monitoring and evaluation – Regular, ongoing monitoring and evaluation would allow for the assessment of AI systems' impact over time. Policies and guidelines could then be adjusted as needed to address emerging challenges.

 

Harnessing AI's Full Potential Ethically and Equitably

AI governance is vital for ensuring that AI technologies are developed and used in ways that benefit society as a whole. By addressing risks, ensuring accountability, protecting privacy, promoting transparency, and fostering innovation, robust AI governance frameworks can guide the responsible development and deployment of AI tools to positively impact lives with minimal to no unintended consequences.

As we navigate the transformative potential of AI, thoughtful and inclusive AI governance will be key to shaping a future where this powerful technology serves the greater good. It’s a collective responsibility we must embrace to harness AI's full potential ethically and equitably.

 

Data Science Degrees at American Military University

For students interested in AI governance and other related topics, American Military University (AMU) offers an online associate degree in data science and an online bachelor’s degree in data science. These programs are designed for students and data science professionals who are interested in improving or applying their data science skills to various fields such as business, finance, healthcare, and engineering. The courses are taught by experienced faculty who bring real-world insights into the classroom.

The curriculum focuses on training students in essential data science skills, including recognizing data requirements, efficiently collecting and organizing data, conducting and evaluating analyses, and effectively communicating results through visualization. Students have the chance to develop their abilities to deliver reproducible analyses, communicate findings to diverse audiences, and apply their technical knowledge across various fields.

ChatGPT is a registered trademark of OpenAI OpCo, LLC.
BERT is a registered trademark of Google, Inc.
DALL-E is a registered trademark of OpenAI OpCo, LLC.
Google is a registered trademark of Google, Inc.
Microsoft is a registered trademark of the Microsoft Corporation.


About The Author

Dr. Frank Appiah is a faculty member in the School of Science, Technology, Engineering and Math (STEM). He is a trained statistician with over 14 years of experience in industry and academia and is passionate about data science and its applications, ranging from teaching classes in data science concepts to uncovering new ways of modernizing medicine. Frank holds a B.Ed. in mathematics from the University of Cape Coast, an M.S. in mathematics from Youngstown State University, an M.S. in statistics from the University of Kentucky, and a Ph.D. in epidemiology and biostatistics from the University of Kentucky.