(Shout-out also to an excellent conversation between Meredith Whittaker and Camille François on this: The AI series: AI and Surveillance Capitalism | Studio B: Unscripted (youtube.com))

Ariel Ezrachi, Chair: Could you share with us your views about the competition dynamics (or lack thereof) in the AI industry at this point?

There are two dimensions of concern around AI:

  • AI RISK / SAFETY issues (discrimination, bias, inequality, disinformation, toxic content, privacy harms, massive violations of human rights online, election interference, environmental harm…)
  • MARKET POWER issues that create CRITICAL DEPENDENCIES OF SMALL PLAYERS ON LARGE PLAYERS. 

Both dimensions are already centre stage in digital markets, but they are confronting us on steroids with AI.

First, a bit of context. Everyone is in a fever dream about AI – which is deliberately being sold to us with ENORMOUS HYPE as the most transformative technology the world has EVER SEEN – hyperbolic narratives that range from existential extinction risk to promises that it will enormously increase productivity and deliver us from all sorts of work. HOW DO WE KNOW THIS IS A PIVOTAL MOMENT?

  • Current models are deployed as vast advertising campaigns – ChatGPT was nowhere until Microsoft started piling $$$ into it and decided to use this text generator with a tenuous grasp on reality as an advert that pushed millions around the globe to experiment with it and adopt it.
  • They are pushed hard by financial investors who need to recover their losses on the metaverse, on web3, on crypto, and who need this to be seen as a transformative/pivotal moment – Silicon Valley and Wall Street need these hype cycles to sustain hyped-up valuations for their next IPO or acquisition.
  • Tech giants themselves want rising profits and valuations. “Is this safe? Is this ready to be launched onto the world? Is it connected to the truth?” A community of scholars has been asking these questions, but at some point this was simply deployed onto the public regardless – not because anyone was reviewing the scholarship, but because of the imperative of reporting ever-increasing profits and growth.

Plus, the notion that this new super-clever thing is going to reach escape velocity, become sentient and dominate the world is irresistible to a lot of powerful men, who love the narrative that we need to fasten our seatbelts because either they control that powerful AI or we are going to be eaten by it.

THE REALITY is that all of this hype is obfuscation and distraction. AI as a term has been around for decades; then, around 2012, there was a realization that techniques developed in the late 1980s, combined with masses of data and computing power, could do something quite impressive. There was a recognition that one could do more than sell advertisers access to your demographic profile – the same resources could be used to train AI models and to expand into new markets. At the time there was an initial discussion by AI ethicists employed by companies such as Google – “what are we doing here? this stuff has the potential for not being pretty (discrimination, bias etc.)!” – but they were then fired for asking inconvenient questions.

We need a POLITICAL ECONOMY VIEW: understand the business models, who wins and who loses, and then look at who is telling us this is a pivotal moment.

When you do that, the economic reality is that the key assets (compute, data, GPUs) are concentrated in the hands of a very few companies in the US and China – the same tech giants that are already the focus of concern and antitrust enforcement for their power and conduct. Players who are at the core of our concerns about the SURVEILLANCE ECONOMY. The very players who accumulated OUR data by appropriating it and cheating on data protection laws are now using their lakes of data to gain an advantage in this technology. So we have these powerful technologies controlled by a handful of companies who will always put their profit/growth objective functions first. Their narrative is: you need this, we hyperscalers are the only ones in the West, outside China, with these assets, suck it up. But this means concentration is inevitable. That means entry is hard and monopoly is persistent. AND IT ALSO MEANS THE DIRECTION OF INNOVATION IS DRIVEN BY THEM! This cannot possibly be a narrative that goes down well with regulators.

THEN we see the same Big Tech giants doing deals with the independent AI companies – they say this is all benign, supportive, collaborative, smiles on everyone’s faces, but in reality these “deals” further reduce the prospect of truly independent challenges. What are the terms of these investments? What rights are associated with them? Is it conceivable that a $10bn investment comes with no strings attached? Is it a merger in all but name? What are the prospects for independent disruption? First OpenAI, then Anthropic, and now Mistral, which just announced an “investment” by Microsoft. This has created huge waves even though the investment is small – because Mistral was a main actor during the AI Act approval process, arguing that we need no regulation of generative AI because European challengers like Mistral needed a chance to fight the tech giants… so much for that.

So we have a super-concentrated upper layer of very few Usual Suspects, which is also getting its claws into smaller players, reducing the opportunity for disruption.

Then, yes, you potentially have a competitive productization layer downstream, but as we know from other industries, competition downstream that depends on an upstream input does not disperse concentrated power upstream – in fact there is the potential for self-preferencing, foreclosure, exclusion, etc. This is not a solution.

SO: the market power problem is one of STRUCTURAL massive advantages arising from ownership of scarce resources, and of PERSISTENCE and GRANDFATHERING OF POWER. THIS IS NOT SELF-CORRECTING. It is getting worse in so many dimensions. Take NEWS. In a world in which publishing and news are being decimated by platforms using news content and paying nothing for it, we have AI models crawling publishers’ high-quality content (which is very valuable as ordinary internet slime degrades overall quality) – PLUS Google just offered a new tool to “independent publishers” which uses AI to tweak EXISTING CONTENT so they can reproduce and publish it in exchange for analytics, feedback, and “a monthly stipend amounting to a five-figure sum annually”, without paying anything to the original content source… this is surreal. “Strong-arming cash-strapped small publishers into polluting the information ecosystem and calling it ‘training’.” Google is incentivizing the production of AI-generated slop.

We have been admiring the problem of surveillance business models and growing market power in Big Tech for decades, telling ourselves it would be self-correcting; it feels like Groundhog Day.

Ariel Ezrachi: What are your views about [1] the effectiveness of competition law? And [2] the adequacy of current regulations – the DMA, the AI Act, …?

I mentioned at the start that we have a host of AI Safety issues, which are distinct from the AI market power issue.

The AI Safety issues are exacerbated by the “bigger is better” paradigm we have been in since 2012, where we rely on data that reflects a past and present that are discriminatory, racist and misogynist – so of course that data is going to be reflected in the output of the system. As the latest example, see the drama around Google’s Gemini image generation and all the problems of bias and discrimination it brought up. But who is surprised? It shows it is not the case that with bigger models and bigger data we will head roughly in the right direction and get better at tackling these problems – we have research saying this is not what is happening. Abeba Birhane and colleagues just published a paper showing that with bigger models and more data those racial biases get worse; you get more of them. We have serious issues with how to tackle and mitigate the social impact of these models because we are scaling them too fast – we are not catching up with the problem we have.

Yet we are trapped in a sort of MAGICAL THINKING – as Meredith Whittaker put it, “we have a little bit of trash which makes up a trashy model, let’s pour more trash on top and we will clean it up” – some of these problems are simply intractable at this point: we cannot create a dataset that is unbiased.

The fundamental issue is that the incentives driving the tech industry are not geared towards social benefits. And yes, to deal with this, late as we are, we have the AI Act, Biden’s Executive Order, and in the UK the Government’s White Papers promising to focus on AI risks. We will see.

But none of this conditions on market power, and none of it is useful for that problem.

For the market power problem, I go back to my admiring-the-problem and Groundhog Day point:

  • Competition law, which is ex post and conditioned on a finding of dominance and anticompetitive conduct, is not REMOTELY an answer. We are talking about “right now”. We hear regulators saying “the window of opportunity is closing”… Yes, let’s keep admiring the problem.
  • The DMA and the DMCC are struggling enough as it is to deal with gatekeepers; the thought that they could be used to deal with AI is delusional.
  • What can be done instead:
    • Immediately and directly address discrimination in any form with common carrier-type rules – no self-preferencing, no preferencing, etc.
    • Break up or hold separate some of the key components of the stack to address incentives:
      • Cloud
      • Data, which are ours and could collectively be designated a commons
    • Prohibit tie-ups and “investments” by Big Tech
    • Encourage government investments, with caveats.

Ariel Ezrachi: Is there anything else government could do to address the concerns?

The job of antitrust agencies is to make sure these companies do not grow impossibly large; the function of antitrust is to check economic power. However, antitrust and regulation will not go far enough even if aggressive – and, given the track record, they will not be aggressive enough.

So the complement to regulation is that there needs to be – especially in Europe – a REAL INDUSTRIAL POLICY of PUBLIC INVESTMENT IN THESE CAPABILITIES, from universities to private companies.

The US government established the National AI Research Resource (NAIRR) last month, a pilot project led by the US National Science Foundation working with 10 other federal agencies and 25 civil society groups. In theory it is intended to provide access to government-funded data and compute to help the research and education community build and understand AI. Will it work?

First, a key requirement must be that these public/private partnerships do not, in the end, just funnel public funds back to cloud companies (not a crazy allegation: the idea for NAIRR originally emanated from within the national security establishment, which has historically had a close relationship with corporate monopolies). The initial idea was that tech giants would receive public money to license their assets to the beneficiaries; this is clearly not the spirit, as public money would then find its way to Big Tech again. It now appears to be geared more towards assets being “donated” to the beneficiaries. It remains to be seen.

Second, it is important to be clear about the direction of travel: so far, we haven’t asked enough of AI firms to show us their homework, instead permitting them to coast on shallow assertions that AI will inevitably lead us down the path of technological innovation. Political representatives have largely accepted breezy associations between AI and innovation without feeling pressed to be concrete about what those innovations are and who they’ll serve. So far it largely seems like business as usual – the same firms that brought us the surveillance business model and toxic social media platforms are driving the trajectory for artificial intelligence.

WE NEED INVESTMENT IN SHARED PUBLIC DIGITAL INFRASTRUCTURE – A LOT OF IT, AND URGENTLY – an INDUSTRIAL POLICY, if we are to have any hope of reshaping digital markets – and democratised supercomputer access, which would also help with the creation of “AI factories” where small businesses pool their resources to develop new cutting-edge models.

Again, we need to be careful with public/private partnerships: state funds must be diverted away from Big Tech, even when they are for projects with a public function, and we need to be more explicit about the benefit.

Ariel Ezrachi: How worried or optimistic are you?

How is this going to end well? We have these powerful technologies controlled by a handful of companies who will always put their profit/growth objective functions first. Can we find the right balance of REGULATION that miraculously starts to work in the short term, AND PUBLIC INVESTMENT?

It is hard to be optimistic. Surveillance actors have close relationships and a lot of cosy partnerships with the US and other governments. Monetizing surveillance is the economic engine of the tech industry. The solution space does not go far enough. Yes, we have laws like the UK Online Safety Act, but they take these massive social platforms as a given, and then the solution often looks like extending surveillance and control to government, to government-chosen NGOs, or to actors who will have a hand in deciding how we deal with it – it does not answer the question of how we attack the surveillance business model that is at the heart of this engine. So, no, I am not optimistic…
