
AI Governance Essentials Training

Build Your AI Literacy with this Intro to Responsible AI


In this free 30-minute introductory training course, you will learn key terms and concepts of Responsible AI Governance. The AI Governance Essentials Course emphasizes the importance of responsible AI practices in organizations. Responsible AI governance ensures safe and ethical AI use, enhances decision-making, accelerates AI adoption, and protects reputational trust. Key objectives include understanding AI applications, risks, and best practices. Five primary risks must be managed: regulatory, operational, reputational, ethical, and legal. Responsible AI principles include fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability. The five foundational AI best practices are:

  1. Inventorying AI projects (learn more about AI Guardian's AI registry here)

  2. Establishing AI project approval criteria

  3. Issuing a sensible AI policy (find a template here)

  4. Setting up an AI committee (find a sample committee charter here)

  5. Managing AI governance in a centralized system (request a demo of AI Guardian here)


To continue your journey through AI Literacy to AI Fluency, request access to the full library of AI Guardian training materials here.





Full Transcript: Hello and welcome to this training tutorial on AI governance essentials, empowering responsible AI practices in your professional role. This tutorial is produced by AI Guardian to provide responsible AI training for those managing the use of AI in organizations and companies.

Let's start with what AI governance is. It's a set of principles and frameworks that ensure the responsible use of AI. Many companies today are adopting AI rather quickly, and AI governance provides a framework that allows you to do so safely and responsibly. It encompasses the processes that direct, manage and monitor AI activities, and it provides comprehensive oversight and documentation of AI models and their outputs, as well as the use of AI applications across the organization. This is particularly important because so much of the work in AI is not necessarily AI model building, but the use of AI in your day-to-day role.

Today we will talk a little bit about AI governance and its benefits. The benefits of AI governance are very clear:

First, it provides improved enterprise decision making and execution. It gives you a roadmap and an approach to how you should take on AI.

Second, it provides accelerated AI adoption and long-term innovation success for you and your coworkers.

It protects your reputational trust across stakeholders and end users.

It minimizes governmental, regulatory and legal risk,

and it helps align the ethical principles involved in the use of AI with your company's mission.

These benefits are incredibly important for every organization as it moves forward, and they are ones you should see if you take these training module lessons to heart.

In today's tutorial, we're going to talk about a few objectives within this curriculum. The curriculum is designed to help you gain an understanding of, and proficiency in, these five areas.

First, AI governance itself and your role in supporting it.

Second, AI risks and how to mitigate them.

Third, ethical and societal considerations of AI systems, particularly because the outputs of these systems have broad consequences for communities beyond your organization.

Next, responsible AI as a framework for AI governance. How do we put together a responsible AI framework that helps you drive this forward?

And then we'll give you practical examples of AI governance best practices that you can bring into your organization and apply yourself.

AI governance and you. Why is this so important for you, both during this AI training and when you apply it after you complete it?

First, governance is a team sport, requiring everyone's contribution toward the implemented frameworks. When it is successful, it becomes part of an organization's culture and DNA.

Second, everyone involved in developing, deploying or using AI systems has a shared responsibility to ensure responsible AI best practices are maintained. That includes all groups and all levels, from the top of an organization down to the front-line implementation teams across an org. This is particularly important because AI applications span entire organizations.

There are a variety of applications for AI that help with efficiency realization: the ability to do repeatable, scalable tasks faster and more easily, with less input. There is also a host of AI product and experience innovation that most companies will begin to undertake as they adopt AI more holistically. These activities flow through every function at an organization, from sales and services through your marketing organizations, your development and product teams, and even back-office groups such as finance, HR and legal, all of which play vital roles in an organization where this technology is going to be applied quickly.

Now, the primary application areas for AI today fall into the following:

First, search and discovery: the ability to find information and synthesize it more quickly.

Second, document synthesis and summarization: the ability to take the vast amount of data at a company and turn it into something actionable.

Third, customer service and support: whether it's back-end support that helps summarize and properly route escalation issues within an org, or front-end chat bots and other capabilities that many organizations are bringing to the forefront of their consumer experience.

Fourth, coding and development: development teams using AI coding assistants, doing QA more quickly, and documenting and summarizing work in an orderly manner.

Next, you're seeing a lot of marketing teams create content and creative within the IP rules of LLMs and other AI applications.

You're also seeing AI help with generation in sales outreach and other similar touch points.

And then lastly, healthcare diagnostics and other such areas. This is particularly important, as many of these applications fall under what are called high-risk AI use cases:

healthcare, HR and hiring practices, financial practices

The ability to use AI in these areas has to be taken very seriously, as the outcomes of AI have real consequences for the communities to which it's applied.

As companies, and your company in particular, bring AI in and integrate it, the business risks associated with its application can mount quickly. There are five primary types of risk that everyone at your organization should be aware of.

The first two are regulatory and legal. There is a host of rules at the state, federal and global levels that has to be taken into consideration. Some of that comes in the form of regulation and some in the form of laws actually being passed, but both have real consequences if not followed properly.

The third is operational risk. There is a real risk of disclosing proprietary information that should not be disclosed to outside systems, and business disruption if AI is not implemented properly.

Next is reputational risk. For everything you do, there is a risk of negative sentiment from poorly implemented AI efforts. How this helps the company is important, but equally important is how this AI is going to reflect on the company and our values.

And then lastly, ethical risk. "Is this the right thing to do?" is a question that needs to come to the forefront of every AI discussion, particularly in areas where AI can amplify issues like bias and unfairness and expose vast numbers of interactions to those issues.

Now, in understanding your role in identifying broader societal risks from a deployment, it's important to recognize that this is a fast-changing area, and there are many societal risk areas coming into play that need to be considered alongside all of the work you do with AI applications:

Economic displacement and inequality are a big theme that should be taken into consideration. How AI affects different economic classes is going to come into play for almost every company.

The same goes for biased hiring, medical and financial outcomes, which I talked about on the last slide: high-risk areas whose outcomes have strong real-world implications.

The third is invasion or exploitation of consumer privacy. Privacy over our personal data is a right we have as consumers, and that data needs to be protected in accordance with consumer privacy laws like GDPR and CCPA when it is run through AI applications.

Next are the widespread and sophisticated cybersecurity campaigns that can play into this. These are very powerful tools now in everyone's hands. Thinking about your vulnerabilities, the vulnerabilities you might create, and the opportunities you might create for other parties is an important part of your job when thinking about responsible AI.

Misinformation and fake news: disclosing when AI is used is important, but so are watermarking and making sure that anything AI generated is understood in context as a "non-real" piece of content that a machine has helped generate.

And then finally, disclosure of AI interactions and content: knowing when you're dealing with a chat bot or with an AI output is important from a societal perspective.

When we look at these risks and their potential societal impacts, we really come back to responsible AI as the key framework for managing them, one you should take into consideration within your own organization and your own role. Responsible AI is simply a framework of principles for developing and deploying artificial intelligence in a safe, trustworthy and ethical way. The goal of responsible AI is to increase transparency and reduce issues such as AI bias. Within responsible AI, there are six core principles we are going to talk about and help you understand: fairness, reliability and safety, privacy and security, inclusiveness, transparency and accountability. While we're going to dive into each of these more deeply, and it's a somewhat complex subject for some, what I want everyone to understand as they get through this tutorial is that most of responsible AI comes back to common sense: how, on a regular basis, do I apply the commonsense principles we should be applying to almost anything we do to this new technology? They'll become evident as we talk through each of these. So, while a complex subject in execution, please keep in mind that it's fairly straightforward and helpful, but it has to be done in a diligent and comprehensive way within your organization to be successful.

Let's talk about each of these subjects.

First, we're going to talk about fairness and inclusiveness, because these principles go hand in hand. The edict of fairness and inclusiveness is straightforward: AI systems should treat everyone fairly and avoid affecting similarly situated groups of people in different ways. Particularly important to this is the use of inclusive data sets and testing to ensure and facilitate fairness. As humans, we have a lot of biases built into what we do and how we act as part of our own DNA. Our goal with AI, though, is to ensure that the negative ramifications of bias don't seep into the AI applications and outcomes we're producing. This can be particularly harmful in those high-risk categories I talked about, but it can also be particularly harmful because of the amplifying effect of AI: AI is an automated environment that moves quickly, and small miscalibrations up front can lead to large discrepancies in outcomes, so we need to be diligent on fairness and inclusiveness from the start. On recommended practices, your modeling efforts need to make a firm effort to build fairness and inclusion into the model from the start, and fairness should be part of your goal and performance testing. On testing itself, you should be looking at unfair bias testing, making sure that your model ops and DevOps teams are focused on it, monitoring it, and can report on it properly.

The third area of best practice is using representative data sets. The old adage of garbage in, garbage out is always applicable when dealing with data, and the same holds here from a bias and fairness perspective: biased or unfair data, particularly data that drives outcome recommendations, will lead to models that tilt in that direction. So understanding that and making sure your data sets are representative up front is extremely important. As a final recommended practice, I will come back to the high-risk categories we talked about: financial decision making, hiring and HR decision making, and healthcare decisions are all extremely high-risk use cases, and particular vigilance has to be applied to them as you implement AI in your company.
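As a minimal illustration of what unfair-bias testing can look like in practice, the sketch below computes a demographic parity difference, the gap in positive-outcome rates between two similarly situated groups of model predictions. The group data, tolerance threshold, and function names are hypothetical assumptions for illustration, not a prescribed test from this course.

```python
# Minimal sketch of an unfair-bias check (hypothetical data and threshold).

def positive_rate(predictions: list[int]) -> float:
    """Share of predictions that are positive (1) for a group."""
    return sum(predictions) / len(predictions) if predictions else 0.0

def demographic_parity_difference(preds_group_a: list[int],
                                  preds_group_b: list[int]) -> float:
    """Absolute gap in positive-outcome rates between two groups."""
    return abs(positive_rate(preds_group_a) - positive_rate(preds_group_b))

if __name__ == "__main__":
    # Hypothetical model outputs for two similarly situated groups.
    group_a = [1, 0, 1, 1, 0, 1, 1, 0]
    group_b = [0, 0, 1, 0, 0, 1, 0, 0]

    gap = demographic_parity_difference(group_a, group_b)
    THRESHOLD = 0.10  # assumed tolerance; in practice set by your AI committee
    print(f"Demographic parity difference: {gap:.2f}")
    if gap > THRESHOLD:
        print("Flag for review: outcome rates differ more than the agreed tolerance.")
```

A check like this is only one of many possible fairness metrics; the point is that your model ops and DevOps teams can monitor a concrete, reportable number rather than an abstract principle.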

Next up is reliability and safety. It is extremely important that we, as we build out AI, build trust alongside it, and it's critical within that trust building that AI systems operate reliably, safely and consistently. They should be able to do what they've been designed to do, and when presented with unanticipated use cases or experiences, they should still operate in a logical way towards a non-harmful conclusion or output.

On recommended practices, it should be very clear that rigorous testing of both expected and unexpected use cases should be done on any models or applications prior to wide-scale launch. Those same systems should have complete monitoring and retraining built into them as you move forward; you should understand at all times what's happening within your models and your applications, and make sure those results are reported properly as you move forward with the adoption of AI.
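To make the monitoring point concrete, here is a small, hypothetical sketch of an output-drift check: it compares the rate of a tracked outcome in recent predictions against a baseline established at launch and flags the system for review or retraining when the two diverge. The baseline, window size, and tolerance are assumptions you would tune for your own system.

```python
# Minimal sketch of ongoing model monitoring (assumed baseline and tolerance).

from collections import deque

class OutputDriftMonitor:
    """Tracks the rate of a monitored outcome over a sliding window and
    flags drift when it moves too far from the rate observed at launch."""

    def __init__(self, baseline_rate: float, window: int = 500, tolerance: float = 0.05):
        self.baseline_rate = baseline_rate
        self.tolerance = tolerance
        self.recent = deque(maxlen=window)

    def record(self, outcome_is_positive: bool) -> None:
        self.recent.append(1 if outcome_is_positive else 0)

    def drifted(self) -> bool:
        if not self.recent:
            return False
        current_rate = sum(self.recent) / len(self.recent)
        return abs(current_rate - self.baseline_rate) > self.tolerance

# Usage: record each prediction as it is served, then check periodically.
monitor = OutputDriftMonitor(baseline_rate=0.30)
for outcome in [True, True, False, True, True, True, False, True]:
    monitor.record(outcome)
if monitor.drifted():
    print("Drift detected: schedule review and possible retraining.")
```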

Next is privacy and security. This has been a hot-button topic in many areas of technology adoption: the use of consumer data, private data, or, in the case of healthcare, PII and other sensitive data is an extremely important area to take into consideration and to handle appropriately as you move forward with AI. With AI, privacy and data security require close attention, because access to data is essential for AI systems to make accurate, informed predictions and decisions about people. AI systems must comply with privacy laws that require transparency about the collection, use and storage of data and mandate that consumers have appropriate controls to choose how their data is used. What this practically means is that the laws and regulations around data privacy, like GDPR and CCPA, apply as much to AI as to any other use case and should be taken into consideration as you move forward. As best practices, making sure that you have universal, high-level AI security standards is a must; AI systems should be brought inside your security program, just as other technology and applications are. AI model privacy safeguarding is important as well; you should be on the lookout for areas like data injection and other exploits used in cybersecurity threats. Data privacy transparency is going to continue to be key here, just as it is in your other use cases for consumer data. And finally, you should have a clear understanding of applicable privacy laws. This area "rides shotgun" with data privacy and security, and you need to understand all the laws, especially the rapidly changing ones still developing at a global level, as they relate to both your business and your category.

Next is transparency. When AI systems help inform decisions that have tremendous impacts on people's lives, it's critical that people understand how those decisions are being made and how those outcomes are being generated by an AI system. A crucial part of transparency is interpretability, the useful explanation of the behavior of AI systems and their components. Just as you can look under the hood of a car or another machine, it should be easy and obvious to go into an AI model or application and understand how it's working and why it's generating the outputs that it is. For recommended practices, you can look at designing models for interpretability and making sure that's built in up front during the design period of a project. You need to communicate expectations on these projects to AI users: how the system is going to be built, trained and implemented, and the outputs that will come from the project. And the third recommended best practice is testing, rigorous, repeatable testing to make sure that no variances start to develop within the models or the applications as you move forward and move toward broader adoption.

The last principle is accountability: the people who design and deploy AI systems must be accountable for how their systems operate. Organizations should draw upon industry standards to develop accountability norms, and these norms can ensure that AI systems aren't the final authority on any decision that affects people's lives. The gold-standard best practice is combined human and machine oversight, meaning a lot of testing should happen at a digital or machine level, but human oversight at every step is critical to make sure that the machine's outputs and outcomes are correct and vigorously checked. A second area of best practice here is system measurement, ensuring that you measure the complete system, not only its components, to make sure that, end to end, it's operating as it should. And the third best practice is comprehensive organizational reporting. Like I said up front, all eyes should be on this. This is a team sport from the top down, so everyone in an organization should understand what is happening here and be able to hold accountable the teams and stakeholders working on these projects.

Now that we've covered responsible AI as a framework, let's talk about responsible AI by design within an organization. When we talk about responsible AI by design, we're really talking about three levels of governance for any organization. At the bottom is a little bit of what we talked about: data and model governance. How are we testing our models for issues like bias as we build them? How are we making sure they are transparent? How are we holding people accountable within these frameworks? A lot of this lives, first and foremost, in that model governance layer. To the side of this, shown on the slide as two arrows pointing in, is threat detection. This is about reliability and security: what are the threats we need to be aware of to protect our AI so that it's behaving properly at all times and has been monitored and confirmed as such? The last level is organizational governance: how are we bringing this up through our organization to make sure we have comprehensive views of what's going on and the processes in place to manage AI responsibly across all groups and functions?

When we look at responsible AI by design, we really look at four critical lines of defense to help make sure this is being done properly within a responsible AI framework. Not all of these have to be done at the same time; many organizations will implement them sequentially or in parallel. But these are some of the important things an organization should be thinking about to make sure they've implemented responsible AI properly. The first line of defense is comprehending: it's extremely important to first understand the space. You should understand the best practices in AI and responsible AI, the laws and regulations as they develop and move forward, and the governance frameworks, like the NIST AI RMF (Risk Management Framework), that govern a lot of the work that goes on in responsible AI and its implementation.

Next, once you understand and comprehend the space, you can begin to track it. The second line of defense, tracking, is straightforward: you need to understand what's happening, track it, and audit it as it occurs within your organization. Setting up appropriate AI project tracking is important, and understanding the risk associated with projects and applying risk assessments according to governance frameworks like NIST will be important so you can report back the relative risk that's in place and understand how to plan against it.

Third is IP provenance tracking. Much of the law and regulation on the books today doesn't fully take into account the unique aspects of AI: case law is unsettled, and much of patent law doesn't recognize machine-generated innovation. So having a firm understanding of IP provenance, tracking what you've done, how you've done it, and why you have legal rights, on either a copyright, trademark or patent basis, to own the IP you're generating with the assistance of AI, is critical at this stage, as case law comes to be settled over the next few years.

And lastly, governance reporting: the ability to report across the organization so everyone understands where you sit and how you're progressing on these topics. Now, once you're tracking these items, you can start to set up control institutions and frameworks. It's going to be important to have oversight institutions: someone who leads point on these issues, like an AI Director, and broader, inclusive AI committees that take into consideration the viewpoints of folks across the organization and of the people and communities affected by AI outputs. You're going to need AI use case policies: what are the terms and conditions under which the organization feels comfortable with employees, contractors and third parties implementing AI on its behalf? You're going to need to be doing training on responsible AI, such as tutorials like this one, to make sure that everyone truly understands the fundamentals of responsible AI and how to govern it. And then lastly, third-party risk management. Much of the work your organization and others will be doing is through third-party tools, software and services, so it's not only about understanding the models you're building in your organization and applying governance, but also about understanding how the third parties you work with are doing the same.

Finally, once you understand the space, you're tracking it, and you've got control institutions in place, you can really manage this moving forward as an organization: setting up AI project approvals and making sure lines of command are clear and understood across the organization for the risk assessments you've done; setting up risk mitigation plans so you have a plan for reducing risk across ongoing projects; incident management, identifying incidents as they arise, pulling together the right teams and then reporting back on how incident mitigation is going; AI model testing at the model ops and DevOps levels on things like bias and transparency; and making sure you have legal and IP compliance plans implemented across the org.

Now, with those best practices and those four lines of defense, there is a lot to do. But as a foundational starting point, there are five key things that your organization, or any organization, should be looking at as best practices to implement as soon as possible and continue to refine as it moves forward.

The first of these five is setting up that AI committee, to make sure that responsible voices are brought together on a regular basis to drive your AI governance forward and to make sure it's inclusive of the voices that need to be there, alongside the rest of the organization.

Second, issuing a sensible AI policy. No one AI policy template makes sense for every organization. Every organization needs to understand its risk thresholds and how often it's dealing with high-risk versus low-risk categories of AI usage, and issue a policy that makes sense for it. But every organization should have an AI policy that every employee and contractor understands and has attested to understanding, much like many of the other policies across an org.

The third thing every organization should be doing is inventorying its AI projects. This gets back to reporting and transparency. Everyone at the company should understand all the AI projects that are underway, at both the model-building level and the application level, and be able to report back on their key components, which include each project's mission, risk assessment and risk level, as well as its progress and goals: what is it setting out to accomplish on behalf of the company?
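As an illustration of what an inventory entry might capture, here is a hypothetical record structure; the fields mirror the components mentioned above (mission, risk assessment, risk level, progress, and goals), and the names and values are assumptions for illustration rather than a prescribed schema or the AI Guardian registry format.

```python
# Hypothetical AI project inventory entry (illustrative fields, not a prescribed schema).

from dataclasses import dataclass, field

@dataclass
class AIProjectRecord:
    name: str
    owner: str
    mission: str        # what the project sets out to accomplish
    risk_level: str     # e.g., "low", "medium", "high"
    risk_notes: str     # summary of the risk assessment
    status: str         # e.g., "proposed", "approved", "in production"
    goals: list[str] = field(default_factory=list)

inventory: list[AIProjectRecord] = [
    AIProjectRecord(
        name="Support ticket summarizer",
        owner="Customer Service",
        mission="Summarize inbound tickets to speed up escalation routing",
        risk_level="low",
        risk_notes="No PII retained; outputs reviewed by agents before use",
        status="in production",
        goals=["Reduce average handling time", "Improve escalation accuracy"],
    ),
]

# Simple reporting: surface anything high risk for the AI committee.
high_risk = [p.name for p in inventory if p.risk_level == "high"]
print(f"Projects inventoried: {len(inventory)}; high-risk: {high_risk}")
```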

The fourth best practice is establishing AI project approval criteria, making sure that the right responsible leaders are evaluating each project and approving it before it moves forward. This can be as simple as writing a summary of an AI project, its key components and its risks, and circulating that through an established chain of command. But at any stage, a documented approval path must exist to ensure accountability through the organization for the things that move forward and get implemented.
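A documented approval path can be as lightweight as routing a project summary to the right approvers based on its assessed risk level. The sketch below is a hypothetical example of such a rule; the role names and risk tiers are assumptions for illustration and would follow your own chain of command.

```python
# Hypothetical approval routing based on a project's assessed risk level.

APPROVERS_BY_RISK = {
    "low": ["team lead"],
    "medium": ["team lead", "AI Director"],
    "high": ["team lead", "AI Director", "AI committee"],  # assumed chain of command
}

def required_approvers(risk_level: str) -> list[str]:
    """Return the approvers a project summary must be circulated to."""
    return APPROVERS_BY_RISK.get(risk_level, APPROVERS_BY_RISK["high"])

def is_approved(risk_level: str, approvals_received: set[str]) -> bool:
    """A project may proceed only when every required approver has signed off."""
    return set(required_approvers(risk_level)).issubset(approvals_received)

print(required_approvers("medium"))        # ['team lead', 'AI Director']
print(is_approved("high", {"team lead"}))  # False: approvals still missing
```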

And then lastly, these activities should be managed in a centralized system, so that everyone has access to it, understands where it lives, and can easily find the materials and other items outlined here.

If you do those five things and you apply responsible AI properly, you should have a coherent responsible AI plan that covers your AI governance across the organization and sets you up for success as you move forward with AI and its adoption.

Hopefully this tutorial has given you a solid background on AI governance: its importance, how it applies to you in your role in your organization, and the importance of the entire organization coming together around it as part of its DNA and cultural structure. And hopefully we've also given you the tools to approach this subject at a framework level, with responsible AI, and at a practical level, through the implementation of best practices like AI policies, committees, inventories and processes that are important to getting this right at every step of the way.

Thank you again for your time. Hopefully this tutorial has been helpful, and we look forward to seeing companies like yours accelerate their AI through the use of responsible AI today.


