Cybersecurity & AI: Q&A with Connor Farry
Glen Echo Group Director Connor Farry explains the cybersecurity opportunities and challenges posed by AI
Artificial intelligence is not a brand-new technology, but with the introduction of generative AI tools such as ChatGPT, the public can now integrate AI into daily life. That shift brings both potential benefits and challenges. Our clients are excited about how this technology might power future innovation. At the same time, our cyber experts are concerned that major new platforms are rushing to market without proper security in place. Today, we are looking to one of our own, Glen Echo Group Director Connor Farry, a key player on our cybersecurity team, to help explain the complicated relationship between AI and digital security.
As a director, Connor assists a range of cybersecurity clients with press relations, public affairs, policy work and advocacy. He has witnessed firsthand the tension between cybersecurity and cutting-edge technologies, including AI:
“Artificial intelligence is being integrated into major search engines and social media platforms before cybersecurity is seriously considered. We hear the same message from many of our experts: don’t implement something without securing it first,” he reflected.
There are also serious concerns that malicious hackers could exploit this technology for political and financial gain. Cybercriminals have already launched sophisticated attacks that use AI to craft realistic phishing emails, churning out more drafts than any human ever could. AI is also being used to guess sensitive passwords and even to mimic a loved one’s voice in a voicemail, scamming victims into sharing their personal data.
Despite these challenges, artificial intelligence also has the potential to improve security outcomes. Just as cybercriminals can exploit AI for malicious activity, security experts can harness it to strengthen cybersecurity, potentially detecting intrusions before human analysts would spot them. Implemented correctly, AI could be a real asset to the security field, helping cyber-defenders identify threats before they materialize.
Our clients are paying close attention to recent advancements in artificial intelligence. Experts are identifying some of the biggest challenges and opportunities as they arise. Here are just a few examples:
- Researchers have expressed concerns about data scraping, which large AI models use to collect vast swaths of data from the web, including copyrighted content, graphic images, photos, videos and creative writing. Users of language models like ChatGPT are thus drawing on the work of thousands of artists, writers and photographers without any consent or attribution. Experts such as Hanlin Li, a postdoctoral scholar at the AI Policy Hub of UC Berkeley’s Center for Long-Term Cybersecurity (CLTC), have raised concerns about these practices, and many agree that more regulation is needed to establish data collection mechanisms that are fair to creators and users alike.
- Girl Security, a non-profit that prepares girls, women, and gender minorities for careers in national security, dedicated a weeklong training to AI earlier this year, recognizing the role this technology will play in cyber and national security strategies moving forward. Glen Echo Senior Associate Lama Mohammed facilitated a session on "AI 101: Decoding the Hype Behind Artificial Intelligence," which you can learn more about here.
“We've all heard some pretty wild fears about AI that end up sounding like plots from a sci-fi movie. While some of my top concerns don't include computer intelligence overriding human commands just yet, AI does pose some major potential challenges to U.S. security in the coming years,” Connor said, emphasizing that AI is likely to have an enormous impact on the 2024 United States presidential election.
"As AI deepfake technology becomes even more advanced, our adversaries looking to meddle in next year's presidential election will flood the internet with fake videos, sowing distrust in voters' minds. This is certainly something that keeps security experts up at night," he warned.
Connor advised that major social media platforms, companies and federal security officials need to be ready for a potential AI-centered disinformation campaign around the 2024 election. These tools make it far easier for foreign adversaries to target American voters, and the security community needs every resource available to prepare.
Speaking of resources: for those looking to stay up to date on evolving security and AI issues, Connor recommends daily briefings like POLITICO’s Digital Future Daily and the Washington Post’s Cybersecurity 202. Arming ourselves with knowledge about the ever-changing tech policy landscape is our best defense against the pressure to act and react. Once we understand the landscape, we can deploy the tools and regulations at our disposal rather than resorting to knee-jerk reactions.
Our clients are at the forefront of the discussion that will shape the use of AI over the next decade and beyond. Stay tuned as we wrap up our Summer of AI series and explore some of the questions policymakers are asking about this technology. Have questions of your own? Reach out to us.