


On November 5, AI will also be at the polls

A ballot with the word "AI" written on it being placed in a ballot box. The choice Americans make this November will determine whether they will continue to lead a collaborative effort to shape the future of AI according to democratic principles. Illustration: edited by Erik English; original from DETHAL via Adobe.

Artificial intelligence represents one of the most important technologies of our time, promising enormous benefits while posing serious risks to the nation’s security and democracy. The 2024 elections will determine whether the United States leads or retreats from its crucial role in ensuring that AI is developed safely and in line with democratic values.

AI promises extraordinary benefits, from accelerating scientific discoveries to improving healthcare and increasing productivity across our economy. But achieving these benefits requires what experts call “safe innovation”—developing AI in ways that protect American security and values.

Despite these benefits, artificial intelligence carries significant risks. Unregulated AI systems could amplify social biases and produce discriminatory decisions in employment, lending, and healthcare. The security challenges are even more daunting: AI-powered attacks could probe power grids for vulnerabilities thousands of times per second, and could be launched by individuals or small groups rather than requiring the resources of nation-states. During public health or safety emergencies, AI-generated misinformation could disrupt critical communications between emergency services and the public, undermining efforts to save lives. Perhaps most alarming, AI can lower the barriers for malicious actors to develop chemical and biological weapons, putting devastating capabilities within reach of individuals and groups that previously lacked the necessary research experience or skills.

Recognizing these risks, the Biden-Harris administration developed a comprehensive approach to AI governance, including the landmark Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. The administration’s framework directs federal agencies to address the full spectrum of AI challenges. It establishes new guidelines to prevent AI discrimination, promotes research that serves the public good, and creates initiatives across government to help society adapt to AI-driven changes. The framework also addresses the most serious security risks by ensuring that powerful AI models undergo rigorous testing, so that safeguards can be developed to block potential misuse (such as assisting in cyberattacks or the creation of biological weapons) that threatens public safety. These safeguards preserve America’s ability to lead the AI revolution while protecting our security and values.

Critics who claim this framework would stifle innovation would do well to consider other transformative technologies. The rigorous safety standards and air traffic control systems developed through international cooperation did not inhibit the airline industry, but made it possible. Today, millions of people board planes without a second thought because they trust in the safety of air travel. Aviation became a cornerstone of the global economy precisely because nations worked together to create standards that earned public trust. Similarly, catalytic converters didn’t hold back the automotive industry: they helped cars meet growing global demands for both mobility and environmental protection.

Just as the Federal Aviation Administration ensures safe air travel, dedicated federal oversight, in collaboration with industry and academia, can ensure the responsible use of artificial intelligence applications. Through the recently published National Security Memorandum, the White House has established the AI Safety Institute within the National Institute of Standards and Technology (NIST) as the US government’s primary liaison with private sector AI developers. The institute will facilitate voluntary testing, both before and after public deployment, to ensure the safety and reliability of advanced AI models. But because threats like biological weapons and cyberattacks do not respect borders, policymakers must think globally. That is why the administration is building a network of AI safety institutes with partner countries to harmonize standards around the world. This is not about going it alone, but about leading a coalition of like-minded nations to ensure that AI is developed in ways that are both transformative and trustworthy.

Former President Trump’s approach would be markedly different from that of the current administration. The Republican National Committee platform pledges to “repeal Joe Biden’s dangerous Executive Order that hinders AI innovation and imposes radical-left ideas on the development of this technology.” This position contradicts growing public concern about technological risks. Americans have already witnessed, for example, the dangers that children face from unregulated social media algorithms. That is why the United States Senate recently came together in an unprecedented show of bipartisan support to pass the Kids Online Safety Act by a vote of 91-3. The bill gives young people and parents tools, safeguards, and transparency to protect themselves from online harm. The stakes with AI are even higher. And for those who think that establishing technological guardrails will harm the competitiveness of the United States, the opposite is true: just as travelers came to prefer safer planes and consumers demanded cleaner vehicles, users will insist on trustworthy artificial intelligence systems. Companies and countries that develop AI without adequate safeguards will find themselves at a disadvantage in a world where users and businesses demand assurances that their AI systems will not spread misinformation, make biased decisions, or enable dangerous applications.

The Biden-Harris executive order on AI establishes a foundation upon which to build. Strengthening the United States’ role in setting global AI safety standards and expanding international partnerships is essential to maintaining American leadership. This requires working with Congress to secure strategic investments in AI safety research and oversight, as well as in defensive AI systems that protect the nation’s physical and digital infrastructure. As automated AI attacks become more sophisticated, AI-powered defenses will be crucial to protecting power grids, water systems, and emergency services.

The window to establish effective global AI governance is narrow. The current administration has created a thriving ecosystem for safe and trustworthy AI, a framework that positions the United States as a leader in this critical technology. Stepping back now and dismantling these carefully constructed safeguards would mean giving up not only America’s technological advantage, but also the ability to ensure that AI develops in line with democratic values. Countries that do not share the United States’ commitment to individual rights, privacy, and security would then have a greater say in setting the technology standards that will reshape every aspect of society. This election represents a critical juncture for the future of the United States. The right standards, developed in partnership with allies, will not inhibit the development of AI: they will ensure that it reaches its full potential in the service of humanity. The choice Americans make this November will determine whether they continue to lead a collaborative effort to shape the future of AI according to democratic principles or hand that future over to those who would use AI to undermine the security, prosperity, and values of the nation.