
AI & Politics: The 2024 US Presidential Election and the Future of AI Regulation

By Abigail Walters

Fact Checked by Bart Layton, CEO - AI Guardian

As the upcoming presidential election draws near, the discourse surrounding Artificial Intelligence regulation is gaining unprecedented significance. AI technologies are becoming increasingly important in daily life and business operations, and the need for thorough regulatory frameworks has never been more pressing. Worldwide, AI regulation remains in the early stages of development, even as artificial intelligence techniques seem to evolve overnight. The outcome of the upcoming election will have a major impact on the future of AI regulation for business owners and consumers alike.


The result of this election is a critical concern for the future of AI for both voters and the tech industry. With policy decisions on the horizon that could shape the future landscape of artificial intelligence development and deployment, understanding the stakes and implications of these regulatory choices is essential. As candidates outline their positions on AI oversight, stakeholders across sectors must grasp how these policies might impact innovation, security, and ethical standards.

 

Current Landscape of AI Regulation

The current landscape of AI regulation is complex and evolving, shaped by a mix of existing frameworks, historical decisions, and ongoing legislative efforts amid a rapid flow of generative AI innovations and advancements. In the United States, regulation of artificial intelligence varies widely across states, with no single comprehensive federal law governing the technology. Instead, various agencies and state-level regulations address specific aspects of AI, such as data privacy under the California Consumer Privacy Act (CCPA) and algorithmic transparency initiatives.


Globally, AI regulation is equally inconsistent, with the European Union taking a leading role through its Artificial Intelligence Act, which aims to establish a unified regulatory approach to AI across member states. In other jurisdictions, such as China, regulation appears focused more on development than on risk assessment. Historically, regulatory outcomes have had significant impacts; the implementation of GDPR in Europe, for example, has influenced data protection practices worldwide.


Key regulatory decisions, such as the Federal Trade Commission's actions on deceptive artificial intelligence practices, have set important precedents for how AI systems are scrutinized and managed. These decisions collectively shape how artificial intelligence systems are developed, deployed, and monitored. Given how early AI regulation is in its development, the next president has the potential to heavily influence the future path of how we regulate artificial intelligence.

 


Significance of the Upcoming Election for AI Regulation


[Image: a symbolic representation of the US presidential election and its impact on AI regulation]

The Candidates


Kamala Harris (Democratic Candidate)

Democratic candidate Kamala Harris has articulated a strong and proactive stance on the regulation of Artificial Intelligence, underscoring the necessity for comprehensive frameworks designed to protect consumer data and prevent the misuse of artificial intelligence technology. She is following in Biden's footsteps, continuing to support the executive order he issued in 2023, which established heavy government involvement in the development of AI and further restrictions on consumer uses of AI. She plans to develop more tools to test the safety, ethics, and functionality of newly developed AI systems, and has also mentioned providing more support for the AI Safety Institute at NIST, where these tests and tools would be used. Her position reflects a continuing commitment to establishing regulatory measures that address the ethical and security challenges posed by AI advancements.


Harris advocates for policies that not only enhance transparency and accountability in AI systems but also ensure that these technologies are developed and deployed in ways that respect individual privacy and prevent potential abuses. If elected, she would likely expand on the plans set forth by Biden, resulting in even further restrictions than those currently in place.


At the DNC, there were references to creating an AI bill of rights, which would focus on AI decision-making and privacy protection while also promoting competitive markets in the ever-expanding world of AI. Despite this focus on regulation and on slowing unregulated AI development, Harris has also encouraged the government to recruit more AI professionals so it can better understand the potential concerns of the AI market.

 

Donald Trump (Republican Candidate)

Republican candidate Donald Trump's campaign advocates for a business-centric approach to Artificial Intelligence regulation, prioritizing innovation and economic growth over extensive regulatory oversight. Trump’s position highlights a commitment to fostering a favorable environment for technological advancement, emphasizing that minimal regulatory constraints are essential for maximizing AI's potential and driving economic expansion. His approach aims to reduce bureaucratic barriers, believing that a less restrictive regulatory framework will enable businesses to thrive and accelerate technological progress.


His focus seems to be on expanding AI use in other parts of the government, such as the military. To facilitate this expansion, he plans to broadly deregulate AI, which would give more freedom to those who intend to integrate AI into their businesses. This aligns with his running mate, J.D. Vance, who prefers a hands-off approach to AI.

His main focus on AI regulation is ensuring that large tech companies don't create an environment that slows down the overall development of new AI technology. This is likely due to his associations with the venture capital community, which is heavily invested in AI startups. He fears that regulations would hinder smaller companies from developing new AI technology and, in his own words, "entrench the tech incumbents that we actually have and make it actually harder for new entrants to create the innovation that's going to power the next generation of American growth."


This duo of anti-regulation advocates is not fully aligned with the main body of the GOP, which fears the potential risks of AI. Many Republicans, especially voters, are wary of AI's potential to disrupt jobs and invade privacy, though their concerns do align with Vance's worry about AI power concentrating in the hands of a few tech companies. They express concerns about AI's societal implications and the risks it poses to personal freedoms and traditional values. Despite this disagreement within the GOP, both candidates hold notably permissive views on AI and hesitate at the idea of regulating it.

 

Possible Outcomes of the Election

As the landscape of Artificial Intelligence continues to evolve, the debate over the necessity and impact of stricter AI regulations remains a focal point in both policy circles and the tech industry. Advocates argue that enhanced regulatory frameworks are essential for safeguarding consumer interests, ensuring ethical AI use, and protecting data privacy. Conversely, critics caution that such regulations may impose significant compliance costs on businesses and potentially slow the pace of technological innovation (California SB 1047, for example, has come under fire for its classifications and requirements around generative artificial intelligence system development). Given the likelihood that further AI regulations will be passed after the election, it is critical to examine the benefits and drawbacks of such an outcome.

 

Stricter AI Regulations

Stricter AI regulations have the potential to improve consumer protection by setting clear standards for how AI systems handle personal data and interact with users. By mandating transparency and accountability, these regulations can help prevent deceptive practices, promote trustworthy AI systems, and ensure that AI models are used in ways that prioritize user welfare. Enhanced consumer protection measures can build trust in AI systems and mitigate risks associated with misuse or unethical practices.


As AI systems increasingly rely on vast amounts of personal data, robust regulations can play a crucial role in safeguarding data privacy. Stricter regulations can enforce stringent data handling and protection protocols, reducing the likelihood of data breaches and misuse. By establishing clear guidelines for data collection, storage, and sharing, regulations can help ensure that consumer information is managed responsibly and securely.


Regulations that emphasize ethical considerations can promote the responsible development and deployment of AI technologies. By setting standards for fairness, non-discrimination, and accountability, stricter regulations can address concerns about biased algorithms and ensure that AI systems operate in a manner that aligns with societal values and ethical norms. This can contribute to a more equitable and just application of AI models across various sectors.

But despite these benefits, there are also some drawbacks.

One of the primary concerns associated with stricter AI regulations is the potential for increased compliance costs. Businesses, particularly startups and smaller enterprises, may face significant financial and administrative burdens as they work to meet new regulatory requirements. These costs can include expenses related to legal consultations, implementation of new processes and technologies, and ongoing monitoring and reporting. For some companies, these added costs could be a barrier to entry or limit their ability to scale. 


Critics argue that stringent regulations might slow the pace of technological innovation by creating more rigid frameworks that limit experimentation and development. As companies navigate complex regulatory landscapes, the time and resources devoted to compliance could detract from their focus on advancing AI technologies. This potential slowdown in innovation could impact the industry's ability to quickly adapt to new challenges and opportunities, potentially stifling progress in a rapidly evolving field.

 

Case Studies


[Image: a conceptual illustration of the EU AI Act]

International Examples - The EU AI Act

International examples of AI regulation offer valuable insights for shaping U.S. policy, particularly from countries that have implemented stringent AI frameworks. The European Union’s Artificial Intelligence Act (EU AI Act) is a notable example of such regulation, providing a comprehensive approach to managing AI technologies. By examining the lessons learned from the EU’s experience and other international practices, the U.S. can better inform its own regulatory strategies.

 

The EU AI Act represents one of the most advanced attempts to regulate the use of artificial intelligence on a broad scale. Recognizing that human-defined objectives drive the decisions, recommendations, and predictions of a machine system, ultimately influencing the real or virtual environments we aim to protect, the EU AI Act categorizes AI systems based on the risk level of their use (not just the systems themselves), ranging from minimal to unacceptable risk, and imposes varying requirements accordingly. High-risk AI systems, such as those used in critical infrastructure or legal decisions, face stringent requirements, including extensive documentation, risk assessments, and human oversight. Lower-risk systems are subject to less rigorous rules but must still comply with transparency and accountability standards.

 

A hallmark of the EU AI Act, one that other AI regulations are increasingly replicating, is its tiered approach to regulating AI systems, which tailors requirements to the level of risk associated with different AI applications. Under this scheme, sensitive uses such as AI models built for education are thoroughly vetted and monitored, ensuring that high-risk AI systems, which can significantly affect daily life, do not end up interfering with or endangering their users. This model helps balance innovation with safety, ensuring that high-risk AI systems are closely monitored without stifling lower-risk uses of AI.
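
To make the tiered model concrete, here is a minimal sketch of how a compliance tool might represent these categories in code. The tier names follow the Act's public summaries, but the specific use cases, obligations, and names below are simplified illustrations of our own, not legal classifications drawn from the Act's text.

from enum import Enum

class RiskTier(Enum):
    # Simplified EU AI Act risk tiers, ordered least to most restricted.
    MINIMAL = 1       # e.g., spam filters: no new obligations
    LIMITED = 2       # e.g., chatbots: transparency duties
    HIGH = 3          # e.g., education, infrastructure: full compliance regime
    UNACCEPTABLE = 4  # e.g., social scoring: prohibited outright

# Hypothetical mapping of example uses to tiers (illustrative, not legal advice).
USE_CASE_TIERS = {
    "spam_filter": RiskTier.MINIMAL,
    "customer_chatbot": RiskTier.LIMITED,
    "exam_grading": RiskTier.HIGH,
    "social_scoring": RiskTier.UNACCEPTABLE,
}

# Sketch of per-tier obligations; the Act's real requirements are far more detailed.
OBLIGATIONS = {
    RiskTier.MINIMAL: [],
    RiskTier.LIMITED: ["disclose AI use to users"],
    RiskTier.HIGH: ["risk assessment", "technical documentation", "human oversight"],
    RiskTier.UNACCEPTABLE: ["deployment prohibited"],
}

def obligations_for(use_case: str) -> list[str]:
    # Default conservatively to HIGH when a use case is unknown.
    tier = USE_CASE_TIERS.get(use_case, RiskTier.HIGH)
    return OBLIGATIONS[tier]

print(obligations_for("exam_grading"))
# -> ['risk assessment', 'technical documentation', 'human oversight']

The conservative default for unknown uses mirrors the compliance-first posture the Act encourages; in practice, of course, classification is a legal determination, not a lookup table.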

 

Another effective feature of the EU AI Act is its transparency mandate, which requires clear documentation and explanations of AI algorithms and their decisions. This transparency is crucial for building public trust and ensuring that AI systems are used ethically and responsibly. And given that a good portion of the EU AI Act aims to soothe public worries about the privacy and safety of generative artificial intelligence and its growing presence in daily life, this transparency mandate is well suited to accomplishing that goal.

 

The Act reinforces data protection principles, aligning with GDPR to safeguard personal data. This focus on privacy ensures that AI systems handle data responsibly, mitigating risks of misuse and breaches. The Act also requires human oversight for high-risk AI applications, aiming to prevent harmful outcomes and ensure that AI decisions remain subject to human judgment and accountability.
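
As a rough illustration of what a human-oversight requirement can look like in software, the sketch below holds a high-risk system's outputs for reviewer sign-off instead of applying them automatically. The Decision and HumanOversightGate types are hypothetical names of our own, not constructs defined by the Act.

from dataclasses import dataclass, field

@dataclass
class Decision:
    # One automated output awaiting human review.
    subject: str
    outcome: str
    approved: bool = False

@dataclass
class HumanOversightGate:
    # Queue that keeps high-risk outputs from taking effect unreviewed.
    pending: list = field(default_factory=list)

    def submit(self, decision: Decision) -> None:
        # High-risk outputs are held for review rather than auto-applied.
        self.pending.append(decision)

    def review(self, decision: Decision, approve: bool) -> bool:
        # A human reviewer accepts or overrides the automated outcome.
        decision.approved = approve
        self.pending.remove(decision)
        return decision.approved

gate = HumanOversightGate()
loan = Decision(subject="loan_application_42", outcome="deny")
gate.submit(loan)
print(gate.review(loan, approve=False))  # the human overrides the denial -> False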

 

Lessons Learned

The U.S. can benefit from more broadly implementing a risk-based approach to AI regulation, similar to the EU's model. This would involve categorizing AI systems by their risk level and applying appropriate regulations accordingly. Such a framework would help manage high-risk AI systems effectively while allowing room for innovation in lower-risk areas.

Emulating the EU's emphasis on transparency could improve trust and accountability in AI systems in the U.S. Requiring clear documentation and explanations of AI processes would help users and regulators understand and scrutinize AI decisions, fostering ethical use and reducing the risk of misuse. Incorporating robust data protection measures similar to those in the EU AI Act would enhance privacy and security in AI systems. Aligning with principles from GDPR could provide a strong foundation for safeguarding personal data and addressing privacy concerns.

The U.S. can also learn from the EU's requirement for human oversight in high-risk AI systems. Implementing similar measures would ensure that critical AI decisions are subject to human review, reducing the potential for harmful outcomes and maintaining accountability.

 

The experiences of the European Union with the AI Act offer valuable lessons for the U.S. in developing a comprehensive and effective AI regulatory framework. By adopting a risk-based approach, enhancing transparency, strengthening data privacy, and ensuring human oversight, U.S. policymakers can create a balanced regulatory environment that fosters innovation while addressing the ethical and security challenges posed by AI technologies. Learning from international examples and integrating these insights into U.S. policy can help shape a more informed and responsible approach to AI regulation. But the EU AI Act only began implementation last month (August 2024), so it remains to be seen whether the legislation will be effective. The United States can certainly learn from the EU's models and concepts, but whether the Act's implementation and real-world application will also be worth emulating is still an open question.

 

Future Outlook


[Image: a conceptual illustration of the future of AI regulations]

Predictions for AI Landscape Post-Election

It is hard to predict the future of AI, as public opinion seems to turn on a dime every week, but long-term trends are visible. Looking at international examples like the EU AI Act, it is easy to see a general movement toward stricter regulation of AI and its development. Broad concern over privacy and safety will push lawmakers toward greater involvement in the moderation of AI. That said, the post-election landscape will look very different depending on which president is elected: the long-term trends are likely to hold, but in the immediate post-election period, the status of AI regulation is bound to see heavy variation.


The first question regarding the outcome of the election is how it will affect the pace of innovation. Before AI systems can be regulated, they first need to be developed, and if the result of the election changes the pace of AI development, it will also affect the need for more legislation. Under Vice President Harris's plans, the pace of AI development likely wouldn't change much, and might even slow down a bit. This could pose concerns for smaller companies that wish to enter the AI market, and could slow the industry as a whole. On the flip side, given Trump's more open approach to AI, his push for deregulation could create a boom in the industry in the initial years following his potential election. This heightened pace of innovation could lead to some interesting developments for the business world, both for investors and users.


On the topic of investors, concerns over the election's economic impact on the AI market cannot be ignored either. The AI market currently seems to be flourishing with investors and ideas, so Harris's plans to slow development for the sake of regulation might slow the market as well. This could create disinterest among many investors, as with heavy regulations the potential for rapid development and economic growth would become practically nonexistent. Trump's approach, by contrast, looks like a high-risk, high-reward scenario for the AI market. With free rein to develop without regulatory constraints, there is enormous potential for investors and startups. Unfortunately, deregulation also risks creating public distaste for AI systems: if there is no way to ensure privacy and security, there may be pushback against all the startups. In the short term, Harris's caution seems more concerning for the AI market's economics, but in the long run, Trump's risks might do more damage.


Another important aspect of AI is global competitiveness in development. With countries like China declaring their dedication to investing heavily in their AI systems, it is critical for the US to stay in front of the pack. There is some fear regarding AI use in the military, and talk of ensuring that we do not fall behind countries like Russia and China in the creation of military-focused AI systems. This is where Trump's approach to AI seems to flourish, as he has focused heavily on the use of artificial intelligence for military purposes. That said, Harris's plans seem similar in many ways to the EU AI Act, which notably includes exemptions for the use of artificial intelligence in military and national security applications. While slower uptake of regulation may enable AI innovation, the piecemeal adoption of varying policies across states hinders it, leaving regulated parties to trudge through a morass of inconsistent requirements from many authorities. If one comprehensive AI bill were passed by the US Federal Government, it could either stifle opportunity for innovation and global competitiveness or amplify it; the difference lies in its details and implementation. For such a comprehensive AI law to help more than hurt, it must bring consistency to requirements rather than merely add on top of what is there (and coming).


Long-term AI Regulation Impact

In either scenario, the first year or two of the presidency is likely to bring notable changes in the landscape of AI development and regulation. Ultimately, though, the long-term effects on the AI world depend on the regulations and responses that emerge in other parts of the world. While both candidates would heavily influence how the United States regulates AI, the wider landscape of AI technology regulation will determine the overall long-term results.


How AI Guardian Plays a Role

Regardless of which direction the election goes, it will be essential for companies to have a comprehensive understanding of the use of artificial intelligence across their businesses so they may ensure compliance with all regulations and communicate with confidence to executive leadership and board members as well as regulators.


An advantage for businesses that use AI Guardian's services is a comprehensive way to monitor the turbulent changes coming to this unsettled regulatory landscape. AI Guardian monitors not only approved regulations but proposed ones as well, which helps businesses stay ahead. This can take much of the stress out of using AI at such an uncertain time, as AI Guardian ensures that businesses get the information they need to comply with regulations before those regulations impact the business. The AI Guardian app offers assurance to executive leadership, board members, regulators, and everyone else involved in ensuring compliance. Dedicated regulation tracking during a time of such uncertainty is an asset whose value cannot be overstated in the current AI landscape.

 

Conclusion

The outcome of the 2024 presidential election will be pivotal in determining the near future of AI regulation in the United States. As AI becomes increasingly integral to business and society, the need for a cohesive regulatory framework is critical. Democratic candidate Kamala Harris advocates for stringent regulations to ensure ethical use and robust consumer protection, potentially leading to tighter controls if elected. Conversely, Republican candidate Donald Trump, along with J.D. Vance, supports a deregulatory approach aimed at fostering innovation and economic growth, which could lead to a more volatile regulatory environment.


The immediate post-election period will bring significant shifts in AI regulation, influenced by the elected administration's policies and international trends. While stricter regulations could enhance consumer protection and privacy, they may also impose higher compliance costs and slow technological progress. Conversely, a more deregulated environment might spur innovation but raise concerns about security and privacy.


In this uncertain landscape, businesses can benefit from services like AI Guardian, which offer crucial support in ensuring regulatory compliance and maintaining user trust. The risks of noncompliance grow as more legislation is passed, and with an election very likely to shift the landscape of the country's regulations, for AI and much more, the need for support grows ever more critical.


Ultimately, the future of AI regulation will be shaped by a complex interplay of domestic policies and global trends, highlighting the importance of adaptability and informed decision-making in navigating the evolving AI landscape. Through both the short-term volatility and the long-term trends of the AI legislative landscape, AI Guardian provides a safeguard for businesses in this time of change.


