The Existential Threat of AI?

There is currently a concerted effort to portray Artificial Intelligence developments as an extinction-level risk to humanity, on par with nuclear war and deadly pandemics. AI experts, namely leaders and CEOs from Big Tech and adjacent industries, certain academics and billionaire-backed philanthropy groups, have all called for state intervention and regulation of their own industry. The mass media, quick to pick up on claims of the oncoming AI apocalypse, have regurgitated these claims from the experts, drumming up apprehension amongst the general public, elected officials and policy makers.

“My worst fears are that we—the field, the technology, the industry—cause significant harm to the world. I think that can happen in a lot of different ways” - Sam Altman, ‘Open’AI

“If somebody builds a too-powerful AI, under present conditions, I expect that every single member of the human species and all biological life on Earth dies shortly thereafter” - Eliezer Yudkowsky, Machine Intelligence Research Institute

Looking at these grave dangers presented by industry insiders and leaders, it appears to me that something does not add up. AI is purported to have the potential to be catastrophic for the human race, yet AI development has not been paused. Quite the contrary: it has accelerated at a breakneck pace as for-profit Big Tech companies seek to cement and expand their market positions and develop ever more powerful AI models.

Two Visions of the Future of AI

There is a conflict playing out over how AI is to be developed.

On one side there is the nexus of Big Tech, the Surveillance Capitalists and, increasingly, elements of immense State power which, as put eloquently by Zuboff, “constitute a sweeping political-economic institutional order that exerts oligopolistic control over most digital information and communication spaces, systems, and processes.” These entities would fashion themselves as a technological high priesthood, gate-keeping the development and deployment of AI so that it remains regulated, restricted and walled off behind their proprietary, non-transparent systems. This is not new. Large technology corporations and States have historically opposed open-source and democratized technology, as it challenges their institutional and monopoly powers within society.

On the other side, there are a growing number of individuals, companies (including some surprises such as Meta and IBM) and parts of civil society that reject this view and see open-source Artificial Intelligence as necessary for innovation and safety through transparency, effective competition and security. As seen in LAION’s “Call to Protect Open-Source AI in Europe,” the touted benefits of open-source AI are safety, competition and security:

“First, open-source AI promotes safety through transparency. Open-sourcing data, models, and workflows enables researchers and authorities to audit the performance of a model or system; develop interpretability techniques; identify risks; and establish mitigations or develop anticipatory countermeasures. Second, open-source AI promotes competition. Small to medium enterprises across Europe can build on open-source models to drive productivity, instead of relying on a handful of large firms for essential technology. Finally, open-source AI promotes security. Public and private sector organizations can adapt open-source models for specialized applications without sharing private or sensitive data with a proprietary firm”

The rest of this blog post concerns itself primarily with the second point raised by LAION which, in my view, is the primary reason the current Big Tech AI incumbents are pushing so heavily for regulation. I’m not saying this is attempted regulatory capture to erect barriers to entry and crush competition but…

Big Tech, Billionaires and a Whole Lot of Lobbying

In the United States, Artificial Intelligence-related lobbying has spiked dramatically in the past two years. As reported by CNBC, AI lobbying efforts rose 185% from 2022 to 2023, with over 450 organizations participating in efforts to influence US Federal legislation. This spike corresponded with growing calls for AI regulation and the Biden administration’s push to codify such rules into law. A whole host of corporations and industries, ranging from the expected Big Tech and AI players to pharmaceuticals, insurance, finance, telecommunications and data brokerages, are involved in these efforts. Even Disney is splashing their cash on the AI scene. As of February 2024, these entities had spent in excess of $950 million on lobbying efforts. But that is not all.

There are billionaire-backed networks of AI advisers that have “taken over Washington”, as Politico so succinctly puts it. These networks are “spread across Congress, federal agencies and think tanks”, with one of them essentially funneling Big Tech money “through a science nonprofit to help pay the salaries of AI staffers in Congress.” This blog post highlights two of these networks; however, it must be acknowledged that these are just two good examples of the increasing influence Big Tech is having on AI policy within the US Federal government.

The first organization of note is Open Philanthropy, primarily funded by Dustin Moskovitz: billionaire Facebook co-founder and CEO of Asana, who also happens to be among the Biden campaign’s biggest donors. Through the Horizon Institute for Public Service, Open Philanthropy funds the salaries of individuals working in key bodies responsible for AI rule making:

“Current and former Horizon AI fellows with salaries funded by Open Philanthropy are now working at the Department of Defense, the Department of Homeland Security and the State Department, as well as in the House Science Committee and Senate Commerce Committee, two crucial bodies in the development of AI rules. They also populate key think tanks shaping AI policy, including the RAND Corporation and Georgetown University’s Center for Security and Emerging Technology, according to the Horizon web site. In 2022, Open Philanthropy set aside nearly $3 million to pay for what ultimately became the initial cohort of Horizon fellows.” - How a billionaire-backed network of AI advisers took over Washington, POLITICO

The second group of note is a “rapid response cohort” of AI fellows (lobbyists) responsible for supporting “leaders in Congress as they craft legislation, in particular policies related to emerging opportunities and challenges with AI.” Run by the American Association for the Advancement of Science with “substantial support from Microsoft, ‘Open’AI, Google, IBM and Nvidia,” this cohort of lobbyists exerts considerable influence on Congressional AI rule making.

“Alongside the Open Philanthropy fellows — and hundreds of outside-funded fellows throughout the government, including many with links to the tech industry — the six AI staffers in the industry-funded rapid response cohort are helping shape how key players in Congress approach the debate over when and how to regulate AI, at a time when many Americans are deeply skeptical of the industry.” - Key Congress staffers in AI debate are funded by tech giants like Google and Microsoft, POLITICO

This immense lobbying effort is not limited to Washington, though due to the United States’ political system this is perhaps the best showcase of Big Tech’s influence in action.

Across the Atlantic, Rishi Sunak’s government, at first nonchalant about the increasing usage and development of AI, has had the dangers of “existential risk pushed right up on the policy agenda” by key government advisors linked heavily to AI companies and Big Tech.

In the European Union, the recently passed “AI Act” was the subject of heavy lobbying by Big Tech. Lobbying efforts, such as those by ‘Open’AI, were focused on watering down the ways in which the law would burden the company. In multiple cases, ‘Open’AI “proposed amendments that were later made to the final text of the EU law.” This seemingly directly contradicts the statements released by these companies about the need for regulation. Hypocrisy from Big Tech and their executives? Who would have thought?

“What they’re saying is basically: trust us to self-regulate,” says Daniel Leufer, a senior policy analyst focused on AI at Access Now’s Brussels office. “It’s very confusing because they’re talking to politicians saying, ‘Please regulate us,’ they’re boasting about all the safety stuff that they do, but as soon as you say, ‘Well, let’s take you at your word and set that as a regulatory floor,’ they say no.” - OpenAI Lobbied the E.U. to Water Down AI Regulation, TIME

It is undeniable that there exists considerable influence from Big Tech on Artificial Intelligence regulation and policy. Regardless of your views on these companies, history is littered with examples of large, powerful corporations, Big Tech or otherwise, pursuing their profit incentive through legislative capture at the expense of the common good. Why should these companies act any differently?

“Tim Stretton, director of the congressional oversight initiative at the nonpartisan watchdog Project On Government Oversight, said it’s “never great when corporations are funding, essentially, congressional staffers.” He said the money from five leading AI firms suggests an improper level of tech industry influence.” - Key Congress staffers in AI debate are funded by tech giants like Google and Microsoft, POLITICO

Sen. Dick Durbin (D., Ill.) remarked that he could not recall a time when representatives for private sector entities had ever pleaded for regulation. - OpenAI CEO Sam Altman Asks Congress to Regulate AI, TIME

Effective Altruism and the Focus on Existential Threats

The intense lobbying efforts on the part of Big Tech to influence the AI regulatory agenda appear to be heavily linked to “effective altruism” (EA): the controversial Silicon Valley ideology and movement that, amongst other things, seems to “advocate policy that’s focused on the distant future rather than the here-and-now.” According to industry insiders, the ideology is now “driving the research agenda in the field of artificial intelligence (AI), creating a race to proliferate harmful systems, ironically in the name of ‘AI safety.’”

“Some of the billionaires who have committed significant funds to this goal include Elon Musk, Vitalik Buterin, Ben Delo, Jaan Tallinn, Peter Thiel, Dustin Moskovitz, and Sam Bankman-Fried, who was one of EA’s largest funders until the recent bankruptcy of his FTX cryptocurrency platform. As a result, all of this money has shaped the field of AI and its priorities in ways that harm people in marginalized groups while purporting to work on “beneficial artificial general intelligence” that will bring techno utopia for humanity. This is yet another example of how our technological future is not a linear march toward progress but one that is determined by those who have the money and influence to control it.” - Effective Altruism Is Pushing a Dangerous Brand of ‘AI Safety’

The above-mentioned organizations, such as Open Philanthropy, have significant ties to the movement, with many AI thinkers seeing the existential AI threats that EA proponents push as “science-fiction concerns far removed from the current AI harms” that should be addressed, resulting in the “steering of policy conversation away from more pressing issues - including topics some leading AI firms might prefer to keep off the policy agenda.”

“The network’s fixation on speculative harms is “almost like a caricature of the reality that we’re experiencing,” said Deborah Raji, an AI researcher at the University of California, Berkeley, who attended last month’s AI Insight Forum in the Senate. She worries that the focus on existential dangers will steer lawmakers away from addressing risks that today’s AI systems already pose, including their tendency to inject bias, spread misinformation, threaten copyright protections and weaken personal privacy.” - How a billionaire-backed network of AI advisers took over Washington, POLITICO

Extensively Lobbied Regulations Benefit Incumbents

The Effects of Such Lobbying

With the influence of Big Tech AI incumbents and the ideas of ‘existential threats’ evidently having a massive role to play in the drafting of AI regulation, one could expect policy proposals to benefit the interests of such parties.

So let’s have a look at an example of a legislative proposal: namely, a bipartisan bill proposed in the United States by U.S. Senators Richard Blumenthal (D-CT) and Josh Hawley (R-MO), Chair and Ranking Member of the Senate Judiciary Subcommittee on Privacy, Technology, and the Law. Within this proposed legislative framework, the key takeaways are:

  • “The establishment of a licensing regime administered by an independent oversight body,” with which AI developers working on sophisticated general-purpose models would be required to register. In this context a licensing scheme is just another word for permission.
  • The above-mentioned body would “ensure legal accountability for harms” by requiring that AI companies be held liable when harms are caused through the use of their models.

This legislation echoes the type of regulation called for by the likes of ‘Open’AI in the May 2023 Congressional hearing on AI.

“he (Sam Altman) supported the creation of a federal agency that can grant licenses to create AI models above a certain threshold of capabilities, and can also revoke those licenses if the models don’t meet safety guidelines set by the government.” - OpenAI CEO Sam Altman Asks Congress to Regulate AI, TIME

The establishment of a strict liability regime based upon existential harms caused by AI models would ensure that only large, already established incumbent companies have the financial and technical ability to comply with the law, while new startups and alternative non-corporate structures (such as open-source projects) would face serious barriers to entry. Such a legislative proposal, if passed into law, would almost certainly limit, if not entirely restrict, the legal development of open-source AI, leaving only the big, closed and proprietary models standing.

Opposition to the Current Regulatory Narrative

This is a view shared by a growing number of key AI figures, academics and technologists. Perhaps the most notable of these is Andrew Ng: Stanford Professor; machine learning teacher to ‘Open’AI CEO Sam Altman; co-founder of Google Brain and a globally recognized leader in Artificial Intelligence. He believes that the notion of Artificial Intelligence leading to the extinction of the human race is “a lie being promulgated by big tech in the hope of triggering heavy regulation that would shut down competition in the AI market,” and he is a big proponent of open-source AI development.

“Andrew Ng said that the “bad idea that AI could make us go extinct” was merging with the “bad idea that a good way to make AI safer is to impose burdensome licensing requirements” on the AI industry.” - Google Brain founder says big tech is lying about AI extinction danger, Financial Review

I would highly advise a read of the interview with Andrew Ng from the Financial Times, from which I will take key snippets. (Extracts copied below are for nonprofit educational purposes under Section 107 of the Copyright Act: Fair Use, and Article 5 of the Information Society Directive)

“Open-source software’s getting easy enough for most people to just install it and use it now. And it’s not that I’m obsessed about regulation — but if some of the regulators have their way, it’d be much harder to let open-source models like this keep up.”

“Some proposals, for instance, have reporting or even licensing requirements for LLMs. And while the big tech companies have the bandwidth to deal with complex compliance, smaller businesses just don’t.”

“When I think about the AI human extinction scenarios, when I speak with people who say they’re concerned about this, their concerns seem very vague. And no one seems to be able to articulate exactly how AI could kill us all.”

“There is also some chance that is absolutely non-zero of our radio signals causing aliens to find us and wipe us all out. But the chance is so small that we should not waste disproportionate resources to defend against that danger. And what I’m seeing is that we are spending vastly disproportionate resources against a risk that is almost zero.”

“Multiple companies are overhyping the threat narrative. For large businesses that would rather not compete with open-source, there is an incentive. For some non-profits, there is an incentive to hype up fears, hype up phantoms, and then raise funding to fight the phantoms they themselves conjured. And there are also some individuals who are definitely commanding more attention and larger speaker fees because of fear that they are helping to hype up. I think there are a few people who are sincere — mistaken but sincere — but on top of that there are significant financial incentives for one or multiple parties to hype up fear.”

“When lots of people signed [the Center for AI Safety statement] saying AI is dangerous like nuclear weapons, the media covered that. When there have been much more sensible statements — for example, Mozilla saying that open source is a great way to ensure AI safety — almost none of the media cover that.”

Andrew Ng is not alone in these views. In addition to the LAION open letter addressed to the European Parliament calling for the protection of open-source AI that was referred to previously, Mozilla has also released a Joint Statement on AI Safety and Openness signed by a varied range of companies, civil society organisations and leading individuals in the realms of computer science, engineering, journalism and policy making.

“Yes, openly available models come with risks and vulnerabilities — AI models can be abused by malicious actors or deployed by ill-equipped developers. However, we have seen time and time again that the same holds true for proprietary technologies — and that increasing public access and scrutiny makes technology safer, not more dangerous. The idea that tight and proprietary control of foundational AI models is the only path to protecting us from society-scale harm is naive at best, dangerous at worst.” - Joint Statement on AI Safety and Openness, Mozilla

“Further, history shows us that quickly rushing towards the wrong kind of regulation can lead to concentrations of power in ways that hurt competition and innovation. Open models can inform an open debate and improve policy making. If our objectives are safety, security and accountability, then openness and transparency are essential ingredients to get us there.” - Joint Statement on AI Safety and Openness, Mozilla

In addition to this, there is growing research highlighting concerns about the impact of Big Tech-influenced regulation on open-source AI development and the resultant negative effects on competition, such as this piece by the Carnegie Endowment:

“policy measures that address only speculative superintelligence concerns (and not more evolutionary AI policy challenges) are especially likely to impose steep costs in exchange for minimal benefits.” - How Hype Over AI Superintelligence Could Lead Policy Astray, Carnegie Endowment

and the peer-reviewed paper from Stanford in the George Washington Law Review: “AI Regulation Has Its Own Alignment Problem: The Technical and Institutional Feasibility of Disclosure, Registration, Licensing, and Auditing”:

“the potential of licensing to undermine competition, raise costs to consumers, enable industry capture, and gatekeep professions indicates AI licensing would create horizontal misalignment… AI licensing that places significant pre- and post-market burdens on companies may be prohibitively costly for smaller developers.”

“Licensing the development or deployment of AI thus has the potential to concentrate economic power in the hands of a few large companies… Licensing may heighten market concentration by advantaging more established incumbents who can more easily bear the licensing costs.”

“Concentration of market power could even exacerbate other harms arising from AI or undermine human values and regulatory objectives these policies aim to promote.”

“Licensing also creates tradeoffs between openness and control. Open access may provide for less control by enabling individuals with bad intentions or insufficient training to more easily access resources, but it may also increase the likelihood that critical issues with the technology are identified after release. Licensing may make it harder for users to expose harms, especially considering how openness provided mechanisms for discovering and addressing cybersecurity risks.”

“licensing regimes are particularly susceptible to capture. For example, research suggests that lobbying by physician interest groups is linked to a higher probability that a state will have occupational licensing in the healthcare industry. The potential for special interest groups to have outsized impact on AI licensing regimes is particularly worrisome given licensing may make the frontier of AI technology inaccessible to most.”

These views have not gone unnoticed. The European Union’s landmark AI Act has carve-outs and exceptions for open-source AI development. Only time will tell how far these exceptions go.

Given the leaked documentation from a Google engineer warning that Google and ‘Open’AI could lose out to open-source technology in the ‘AI arms race’, it is clear that Big Tech is concerned about the prospect of widely available open AI and is evidently lobbying hard for rules to be crafted to its benefit as an industry.

Law Enforcement and the National Security State Are Getting Their Way as Well

While not the main focus of this post, it is worthwhile to note that it is not just Big Tech and the AI industry that are trying to get, and getting, their way when it comes to AI regulation and policy. Law enforcement and national security agencies, leaders and policy makers are also involved in setting the agenda. There is a clear revolving door and incestuous relationship between the National Security State and the Big Tech Surveillance Capitalists, best exemplified by ‘Open’AI’s recent appointment of former NSA Director and spook-in-chief Paul M. Nakasone to the company’s board and its ‘Safety and Security Committee’.

In the EU, civil society organizations have raised the alarm about vague terms, exceptions and carve outs that the AI Act affords to law enforcement and national security agencies allowing them to use AI and conduct biometric and facial recognition on a mass societal level. Some of these statements are linked below:

Conclusions

There are clearly risks posed by the widespread adoption of Artificial Intelligence, and in no way am I diminishing the work to address them. The risks posed by AI in the here and now are varied but very real. These include, but are not limited to:

  • Online political influence operations using AI generated content masquerading as genuine human created content to proliferate disinformation and push agendas.
  • Privacy concerns arising out of the usage of AI systems to sort through, link together and synthesize vast amounts of data to create detailed social scores and profiles of individuals.
  • Privacy concerns arising out of the usage of AI in the realm of surveillance, facial recognition and biometric identification and the risks of bias and abuse.
  • Concerns with Artificial Intelligence’s application in armed conflict, regarding accountability and compliance with the legal obligations of war.
  • Concerns with AI in policing and national security matters, particularly ‘predictive’ models.
  • The usage of AI agents to wage cyber war and attack critical infrastructure through cyber attacks.
  • The usage of AI in the commission of cyber crimes ranging from the generation of lifelike child sexual abuse material to AI enhanced phishing and hacks.

I am no expert on matters relating to AI and I do not claim to be. What I will claim to have, however, is extensive knowledge of the current surveillance society we all reside in, where we are all subject to the invisible panopticon brought about by the same mega corporations and State agencies (the Surveillance Capitalist - National Security State nexus) that are now developing AI and extensively influencing and guiding the regulation of such technologies. Forgive me if I have a large degree of skepticism towards these companies and institutions and their motivations. I think the record speaks for itself on how they behave and operate. If you think it does not, I suggest taking a look at the readings under the Privacy and Security Resources section of my website.

It is my view that ethical and regulatory considerations towards AI systems are not wholly unjustified and in many cases are certainly valid. However, they can be, have been and will be utilized to impose limitations designed primarily to concentrate power over such systems in the hands of those positioned to benefit from their usage the most. The Surveillance Capitalist - National Security State institutional nexus does not want ordinary people to be able to deploy locally run and open-source AIs, as in such a scenario it would not be capable of extracting people’s money, data and behavioral surplus, nor of retaining its monopolistic grip on the digital domain.

The suggested risks posed by open AI systems, and the proposals to establish regulatory regimes and oversight bodies complete with licensing (permission) and strict liability schemes that would crush open AI development, are purported to be for AI safety and the benefit of society. But this obscures the reality. The above-mentioned nexus is not to be trusted: it pretends to serve the common good under a guise of morality while pursuing its own self-interests of profit and informational power, regardless of the consequences for the public. This is what I believe to be readily transparent and clear.

This also raises another question. If Artificial Intelligence systems are indeed an existential threat and far too dangerous to be open sourced, democratized and available to the public at large, then on what grounds is it not likewise too dangerous for such power to be consolidated in the hands of Big Tech, the Surveillance Capitalists and the National Security State apparatus?

It is my opinion that the ultimate threat posed by AI is not some doomsday extinction event but rather the consolidation of Artificial Intelligence power in the hands of an exclusive high ‘priesthood’, without true accountability and transparency. We have seen what the internet looks like today: a locked-down, commodified shell of its original aspirations that places us all as economic objects in the relentless drive to extract and monetize ever more behavioral surplus (that is, Surveillance Capitalism), while we are all watched under the omniscient eye of Big Brother(s). Artificial Intelligence has the potential to upend this relationship between us and the centers of informational power, or to forever render us informational commodities. So which will it be? Only time will tell, but that future rests upon whether AI power is consolidated or dispersed within society.


“The liberty of a democracy is not safe if the people tolerate the growth of private power to a point where it becomes stronger than their democratic state itself. That, in its essence, is Fascism—ownership of Government by an individual, by a group, or by any other controlling private power.” - Franklin D. Roosevelt, 32nd President of the United States of America


Disclaimer: I do not, by any means, claim to be an expert in matters relating to privacy, security and law, or offer what can be construed as guaranteed fool-proof advice. What I do offer is an insight into these matters from someone who is highly invested in personal privacy and security themselves and who is studying technology law at the level of higher education.