Summer of AI 2.0: The Recap
Is AI brat?
As long summer days wind down and policymakers come back from the August recess, one thing is clear: we are headed into a busy fall, and conversation around AI will only be amplified as we gear up for a historic U.S. presidential election. AI in political advertising is raising concerns about misinformation on a massive scale, which could impact not only voters’ perceptions of reality, but also the election results.
And once those results are in, that won’t be the end of AI policy ― far from it. Coming from the Bay Area, Democratic nominee Vice President Kamala Harris is no stranger to tech. Harris played a key role in implementing the Biden administration’s AI efforts, including an AI executive order establishing new standards for AI safety and security. If elected, she’s expected to continue working to mitigate short- and long-term risks stemming from AI tools, particularly those around harms from AI bias and deepfakes. The Republican candidate, former President Donald Trump, has said less about AI, but has backed the Republican National Committee’s official policy platform, which calls for revoking the order.
With November looming, the Glen Echo Group has been closely tracking AI policy developments all summer long. If you’re just tuning in to our Summer of AI 2.0, not to worry ― you can catch up on the latest below:
Glen Echo Group CEO Maura Corbett kicked off the Summer of AI 2.0 with remarks at the Future of Privacy Forum’s inaugural DC Privacy Forum: AI Forward and Annual Advisory Board Conference in Washington, DC. She reminded us of the enormous challenge of understanding ― and helping others understand ― complex tech policy issues with wide impacts: “It is critical, especially in today’s fractured media environment, that those covering tech actually understand it. The media plays such an important role in shaping the larger narrative and the public’s understanding of any given subject. Emerging technologies ― like AI, the subject of so much discussion today ― are already changing our lives in so many ways. Education is key. How we communicate about new technologies matters immensely as far as building public trust, adoption and regulation.”
Databricks, a global data and AI company, released its 2024 State of Data + AI report, which analyzed usage data from over 10,000 customers on how they improved, implemented and utilized machine learning (ML) strategies for their data management needs.
TL;DR? Companies are eager to use AI internally and want to better understand it. Early adopters tend to be highly regulated industries (think: financial institutions and healthcare systems). More and more companies are optimizing off-the-shelf large language models (LLMs) with their own private data rather than using standard models.
Congress also took AI into consideration this summer, with the Senate Commerce, Science, and Transportation Committee holding a full hearing on “The Need to Protect Americans’ Privacy and the AI Accelerant,” featuring testimony from experts including Morgan Reed, president of ACT | The App Association, and Udbhav Tiwari, Mozilla’s Director of Global Product Policy.
Tiwari’s comments focused on privacy legislation as the necessary first step in developing AI policy, while Reed focused on preemption and other provisions needed to protect small businesses. Witness testimony repeatedly emphasized that, as Tiwari put it, “maintaining U.S. leadership in AI requires America to lead on privacy and user rights.”
Head buried in the sand? Catch up quick on the latest AI News!
Here’s what people are talking about on AI and elections:
- The Federal Election Commission (FEC) announced that it would not propose any new rules or take action on AI used in political advertising this year. This means that the upcoming U.S. presidential election will have to depend on voluntary actions by tech companies to combat political deepfakes. The decision raised concerns among tech advocacy groups, including the Future of Privacy Forum, which viewed it as a missed opportunity.
- As November looms, five secretaries of state sent a letter to X CEO Elon Musk urging him to fix the social media platform's AI chatbot after it shared misinformation about the upcoming presidential election.
Here’s the latest on AI proposals in development at the federal level:
- On Capitol Hill, the Bipartisan Senate AI Working Group released its long-awaited roadmap for AI regulation. The roadmap focuses on areas for future legislation, including supporting U.S. innovation, the workforce, privacy, intellectual property and national security.
- Reps. Gerry Connolly (D-VA) and Andrew Garbarino (R-NY) introduced a new bipartisan bill requiring federal agencies to designate Chief Artificial Intelligence Officers (CAIOs), as directed by President Biden’s Executive Order on AI.
- The U.S. Department of the Treasury published a new set of proposed rules to ban or require notification of certain AI and other technological investments in China that threaten U.S. national security.
As usual, the states seem to be leading the charge on AI policy. Here’s what people are talking about:
- California’s AI safety bill, SB 1047, has made the state ground zero in the fight over AI safety rules. The bill has seen fierce pushback from industry groups, who insist that AI policy should come from Congress and be written with greater specificity.
- Colorado named the members of its Artificial Intelligence Impact Task Force, a body created by the historic AI bill (SB 205) passed in May. The group must report recommendations to the Joint Technology Committee and the Governor's Office by February 1, 2025.
Another AI summer may be coming to a close, but we'll continue to follow the important issues surrounding AI into the fall and beyond. So stay tuned, and if you have questions, reach out to us.