- Max
This is the next chapter.
- Zuzana
Welcome to Paradigms, a podcast by Leitmotiv, the independent digital policy think tank that exists to operationalize values in our digital future.
- Max
In each episode, we unpack the latest developments and debates in our digital world. Together, we try to make sense of what is happening to our digital future, who is shaping it, and what we can do to change its path.
- Zuzana
My name is Zuzana Ludvikova and I'm here to ask statistical questions.
- Max
And my name is Max Schulze and I'm here to give simple answers. At Leitmotiv, I think about the policies that shape our digital future and how to operationalize our societal values within them.
- Zuzana
And my job is to bring our thinking to your ears, eyes, and minds. In our podcast, we invite you to think with us as we unpack the building blocks of our digital world and work towards a better future that amplifies the best of our core values and our humanity. Today's topic is the antitrust trial of Meta, brought by the US Federal Trade Commission and centered on Meta's alleged monopoly. In 2020, the company was accused of restricting market competition after it bought the social media platforms Instagram and WhatsApp in 2012 and 2014. Though Meta has repeatedly tried to have the lawsuit dismissed, the case still proceeds, and today, as we record this podcast, we are following the second day of the ongoing trial. Obviously, the times today are a bit different. We see direct financial collaboration of tech companies with the White House and Donald Trump, and the rhetoric of these CEOs is changing as well, all of that during the blazing AI race. In the current trial, Mark Zuckerberg has even admitted that Facebook is no longer for personal social networking but is, and I'm quoting, "more of a broad discovery-entertainment space." Max, what is the most surprising or interesting thing for you about this case, and what are some useful frames to think about it?
- Max
I think what becomes clear, if we unpack Meta a little bit, and what's nice to see in his statement, is that they're finally acknowledging that they are not really a tech company. As a software engineer, for me, there has never been a lot of technology involved in building a social network or building feeds like Instagram and things like this. It's becoming more and more apparent, and in his language also clearer, that they're a media company. So historically we can compare what's happening now a little bit to what happened with News Corp in the United States, where they were allowed to continuously buy up newspapers and TV channels and other things, creating a media conglomerate that is eventually so big that you can really use it to steer the public narrative. And for everyone who watched the TV show Succession, I think it is plausible that when a single person owns a very large global media conglomerate, there are some incentives to steer opinions in a certain direction.
- Zuzana
Yeah, there's a Succession plan for me and for all of our senior executives.
- Max
And so from that perspective, shrinking the media conglomerates, in the case of Meta, is probably a wise move. I think it has very little to do with the overarching vertical integration of other tech companies, where they combine very large infrastructure businesses, in the case of Google Cloud, or Azure for Microsoft, or Amazon with AWS, and become more and more deeply integrated from an infrastructure perspective. Meta has never done that. Their strategy was always to go wider: to acquire more channels, to acquire more media platforms. It feels like their strategy was always about how to control the majority of streams and news feeds, and how to basically become the central gateway to consuming what's happening in the world. And I think with their acquisitions they've become really successful at this. They have bought a lot of assets and they have become a critical part of communications infrastructure with WhatsApp and Facebook Messenger and now even Instagram chat. So from our perspective, it's nice to see this being challenged, and I really hope there's a genuine attempt to break it up. But it shouldn't stop there, because again, this is really about media and public conversations. I think we ought to have the same conversation about the real tech companies and breaking up their vertically integrated businesses.
- Zuzana
It's interesting you say that, because OpenAI has also announced that they are building a social network. So there is this attempt to capture sociality in these online spaces. Maybe that would be an entry point to more diversified competition?
- Max
Yeah, I'm not sure that OpenAI getting into this race leads to more diversified competition. I think social networks were one of the first disciplines in the tech ecosystem where you could observe really large network effects, meaning that the more people use it, the more people will come, and it can just grow exponentially without doing much. So a lot of people are excited about getting into that space. But from an OpenAI perspective, I think it is rather this: if you look at social media as a large-scale factory, there are people on one side who are producing free content all day, every day, to get likes and clicks and interactions. Those workers, so to say, provide that labor for free, and then you have people rating that content for free by providing likes and interactions. So you have a very high-performing content factory where you have almost zero cost of making and rating that content. And if I'm an AI company and I need a lot of qualified, high-quality training data, then building a social network, or should we say building a content factory with free labor, seems to be a very attractive proposition. So if OpenAI is thinking about getting into that race, I think it's because of that aspect, rather than really building a social network to connect people with each other or to reap the network-effect benefits, right?
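[Illustration: Max's two points here, network effects and the "content factory", can be sketched in a few lines. The sketch below uses a Metcalfe-style value function (value grows with the number of possible connections) and an assumed price per labelled training example; all numbers are illustrative assumptions, not figures from Meta or OpenAI.]

```typescript
// Illustrative sketch only; every number here is an assumption.

// Metcalfe-style network effect: value grows with possible pairwise
// connections (~n^2), so each new user makes the network more attractive.
function networkValue(users: number, valuePerConnection = 0.001): number {
  return ((users * (users - 1)) / 2) * valuePerConnection;
}

// The "content factory": users create posts and rate them with likes for
// free, which is roughly equivalent to labelled training data an AI company
// would otherwise have to pay annotators for.
function freeLabelledDataValue(
  postsPerDay: number,
  reactionsPerPost: number,
  assumedPricePerLabelUsd = 0.05,
): number {
  return postsPerDay * reactionsPerPost * assumedPricePerLabelUsd;
}

console.log(networkValue(1_000));     // small network: modest value
console.log(networkValue(1_000_000)); // 1000x the users, ~1,000,000x the value
console.log(freeLabelledDataValue(100_000_000, 20)); // daily value of the "free labour"
```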
- Zuzana
And I think this factory that you've just described is a very interesting metaphor when you think about the ad environment and this free labor that the users basically do in their free time, just scrolling and falling into rabbit holes. Because when users use social media for free, it is really hard to prove that this alleged monopoly harms consumers through higher prices. So what more can ads tell us about the critical points in this case?
- Max
I think ads validate the value of the factory, because these factories have become, let's say, the highest paying: most of the advertising spend now goes into these factories. And it doesn't go there because the content is free or because the attention that people give is free; it goes there because the spinning wheel is working really well. The amount of content that's being produced, and the amount of attention that that content generates, is just very, very attractive. In advertising, you talk about eyeballs: how many eyeballs can I get, and how long can I get those eyeballs to stare at something? In TV, that used to be a 30-second commercial, a 45-second commercial, or a Super Bowl ad, lots of eyeballs for a short burst. But in the social media factory, you can get eyeballs much more frequently and for much longer periods of time, if you do it well or if you pay a lot of money. So I think advertising just validates that this factory is working really, really well at mining people's eyeballs and attention, and at the same time that the input costs are basically free. And with that free input cost of making content, you get this tremendous amount of content that keeps the feed always moving, always up to date. Again, we are all laborers in that factory, let's not forget about that, and the advertisers just really appreciate the pace and the scale of that factory.
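[Illustration: the "eyeballs" argument comes down to the standard CPM (cost per thousand impressions) arithmetic advertisers use. A minimal sketch with assumed numbers; the CPM, user count, and usage figures are made up for illustration, not Meta's actual rates.]

```typescript
// Rough ad-revenue arithmetic; all inputs below are assumptions.

// Advertisers bid in CPM: price per thousand ad impressions.
function dailyAdRevenueUsd(
  users: number,
  minutesInFeedPerDay: number,
  adsPerMinute: number,
  assumedCpmUsd = 8,
): number {
  const impressions = users * minutesInFeedPerDay * adsPerMinute;
  return (impressions / 1000) * assumedCpmUsd;
}

// A TV spot buys many eyeballs for one 30-second burst; a feed captures
// eyeballs for many minutes per day, every day, at near-zero content cost.
console.log(dailyAdRevenueUsd(2_000_000_000, 30, 2)); // assumed: 30 min/day, 2 ads/min
```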
- Zuzana
Right. And it gets really tricky when these ad systems are very personalized and just know us, because in this way we have sort of grown to like these ads. Some people would even say that they don't mind getting an ad for a bag that they've been looking at for two weeks.
- Max
Sure. I mean, again, like I said before, there's content coming in, and we validate that content through likes, comments, and interactions. That validation information, of course, also reveals our interests; the button already implies that, right? What we like. And I think the real problem, and it's something that regulation could easily fix, is not per se us expressing these opinions and preferences. It's the fact that the company, the factory owner, is allowed to sell our preferences to advertisers and allow them to bid on them. Ads will always be there. The newspaper ad has been around for a really long time, and it is a business model that exists in our economy. But a very simple law that says the information about preferences and the interaction data is private would solve this, or would at least solve part of the problem, and make these ad businesses a lot less profitable. And one might argue that that would scale down the profits of a company like Meta to a degree that they would have to break up, or they wouldn't have the money to make all these acquisitions anymore. So I think from our perspective, it would be a lot more effective to simply protect the preference information of consumers. This way we adjust the business models, and the business model and the profitability define what these companies can and cannot do.
- Zuzana
Is there anything that users can do to sort of cut off their own labor, to stop it? For example, not liking or commenting? Or is that really not effective, because they have other ways to capture the data?
- Max
That's the big conundrum, right? So first of all, yes, there are other ways to capture this preference information. There's a Dutch researcher, Bert, who has actually built a tool that makes your computer make noise whenever a tracker tracks your mouse and things like this, on any website on the internet. I think in this example he just used Google's tracking code, and basically when he moved the mouse, the sound wouldn't stop. So they now even track mouse movements and the way you use your browser, and those all go into this preference pool. So unfortunately, the advice would probably be to stop using the platforms altogether, but that's not really what people want to hear, and it's also not feasible. So I would rather say: we live in a democratic society, take it into your own hands, write a letter to your politicians, and let them adjust the business model so the incentive for collecting that preference data goes down. But more importantly, so that the preference data cannot be monetized the way it is monetized today, which leads to ever more obscure ways of mining it. Because if, let's say, each preference that you express on social media is worth one cent, then of course I have an incentive to collect as much preference data as I can. So if I can get a million data points on you for one cent each, that's a million cents. That's a lot of money already for one single user. And that just requires that the regulator adjusts the market environment, so the business model, so that you can't make money with it. That's fundamentally, I think, what this is about. And I think that would be a very effective fix, at least for the consumer sector.
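[Illustration: a hypothetical browser snippet, not the actual tool Max refers to, showing how trivially any script on a page can observe every mouse movement, sonified roughly the way he describes, followed by the one-cent-per-data-point arithmetic from his example. The frequencies, listener, and prices are assumptions for illustration.]

```typescript
// Hypothetical, simplified sketch (browser TypeScript). The real tool Max
// mentions reportedly reacts to tracker activity; this version just sonifies
// the raw mousemove events that any embedded third-party script could observe.

const audioCtx = new AudioContext();

function beep(frequencyHz: number, durationMs = 25): void {
  const osc = audioCtx.createOscillator();
  osc.frequency.value = frequencyHz;
  osc.connect(audioCtx.destination);
  osc.start();
  osc.stop(audioCtx.currentTime + durationMs / 1000);
}

// A tracking script could register the same listener and ship coordinates,
// timings, and scroll behaviour off as behavioural "preference" data.
document.addEventListener("mousemove", (event: MouseEvent) => {
  beep(200 + (event.clientX % 800)); // pitch follows the cursor
});

// Back-of-the-envelope value of that stream, per Max's own numbers:
const assumedPricePerDataPointUsd = 0.01; // "one cent" per preference signal
const dataPointsPerUser = 1_000_000;
console.log(dataPointsPerUser * assumedPricePerDataPointUsd); // 10,000 USD per user
```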
- Zuzana
And all these pressures around privacy risks are usually tackled by regulators, as you said. And from Europe, there are a lot of fines and limitations coming in to tone down the US tech companies such as Meta and others. Even Trump says that these are unfair and steal from US companies. So we know that all of this is mostly coming from Europe. My question to you: how different are these two environments, the US and Europe, when they attempt to control tech companies? And what is it that they're protecting?
- Max
I think there's one really important difference. The US is generally not so much in favor of regulating, of course, but when it leaps into action, it always takes out a bazooka, right? So you often get straight into antitrust law: let's break them up, let's shut them down. Europe tries a smaller-scale, iterative approach. The cookie banner is still a good example of this. Tracking cookies are a very small problem, and we applied a lot of regulation to basically enable customers to choose not to have them. So Europe often does not take the bazooka out, because it thinks, okay, we just make these tiny fixes and it will go well. But I think from a European perspective, we need to step up and really think about the business models that are behind it and adjust our regulation to basically influence how you can make money with this stuff. We don't need to take the bazooka out, but at least we need to be more effective with our regulation. It can be much shorter and much more to the point if we understand the business models well. And from a US perspective, I think the same can happen, because the US really believes in innovation, and innovation comes from constraints. So regulating certain business models, allowing or not allowing certain aspects of them, actually creates a new constraint, which could, for example, lead a company like Meta to innovate and think about new ways to make money that are maybe more in line with what consumers want and what privacy advocates stand for.
- Zuzana
Right, yeah, the cookies are very tricky, because it's hard to find the right balance between not being too paternalistic and still giving the internet user the choice to make their own decision. And if you think about the way the cookie rules were sort of misused or laughed at by the tech companies, through "innovation" they could put some dark patterns in there and just go around the whole wall, right?
- Max
Yes, correct. And it's a very technical regulation, right? You're trying to basically dictate that a certain technology shall not be used in a certain way. And you also create innovation; the dark patterns are innovation, in a way. Somebody thought, well, then we just make the "accept all" button very big and green. You see, the regulation created a constraint, you're not allowed to do this anymore, and then people innovate. Now we just need to put the constraint on the business model and not on the technology. And that's something that we in Europe need to learn. This is not a car, where you can say the engine shouldn't emit this much pollution, which is a technical regulation. Here we really have to say: cars shouldn't be given away for free and then financed by advertising running in the car, because that could lead to people being distracted by the screen, having to look at all this advertising, or not being able to navigate to where they want to go because of the advertising. If I say it like this, everybody goes, yeah, we shouldn't do it like this. But in a way, when it comes to tech and most of the stuff on the internet, we actually do try to regulate the engine, where it's pretty clear that the engine may not actually be the problem, but the way the product is sold maybe is.
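[Illustration: the "big green accept-all button" dark pattern Max describes, as a minimal DOM sketch. The element texts and styles are hypothetical; the point is that both choices technically exist, but the design steers the user toward one of them.]

```typescript
// Hypothetical consent banner illustrating the dark pattern described above:
// formally compliant (both options exist), but visually steering the choice.

function renderConsentBanner(): void {
  const acceptAll = document.createElement("button");
  acceptAll.textContent = "Accept all";
  acceptAll.style.cssText =
    "background:#2ecc71;color:#fff;font-size:20px;padding:16px 48px;border-radius:8px;";

  // Rejecting is possible, but small, grey, and hidden behind extra clicks.
  const manage = document.createElement("button");
  manage.textContent = "Manage preferences";
  manage.style.cssText =
    "background:none;border:none;color:#999;font-size:11px;text-decoration:underline;";

  const banner = document.createElement("div");
  banner.append(acceptAll, manage);
  document.body.append(banner);
}

renderConsentBanner();
```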
- Zuzana
I think you point out very well the issues around AI and digital literacy. You said business models, but I think there's also a big issue around how these things are designed, or, as you said, how they are sold, and also the way they look as the product that is being sold. And I think this connects to the way that institutional decision makers, such as the Trump administration or other big actors in this game who make big decisions, sort of don't understand these business models or technologies. For example, I think the judge who is deciding the Meta court case has never scrolled through TikTok or Instagram, and yet he sort of has the main say in this case. So my question is: should justice institutions, policymakers, and regulators have the consumer experience, or is it their objective detachment that is more beneficial when they are supposed to decide what happens to these digital tech companies?
- Max
Yeah, that's an interesting question. Let's use an analogy again. Can you only regulate steelmaking if you've worked at a furnace? No, of course not. You can't know every single industry by heart, or be a customer of it, or have been an employee in it. I think that's a very tricky ask. What we need, though, is this: when I say steelmaking, I think a lot of people can at least imagine the basic process of value creation. What's the product? It's the steel. The technology is the furnace, and the different types of furnaces, and the input is iron ore. Most citizens can imagine this, most judges can imagine and understand this. And when you say digital literacy, I think that's often misunderstood as something that only applies to consumers, teaching them how to use online banking or phones or whatever. What we really need to focus on right now is giving policymakers, judges, and other people in decision-making roles a framework for understanding the components of the digital world. At Leitmotiv, one of our first pieces of work was to create that framework of thinking, acknowledging that, for example, a digital product like ChatGPT in the AI space consists of a technology, which is GPT, the actual model under it. But then there's also a business model, as well as computing infrastructure, or what we call digital resources, which are, so to say, the iron ore. They're the inputs, and the business model dictates how you monetize the technology. And just knowing these three components enables anyone to really act on them, because you're not just saying, you know, we don't like that Meta is so big; you really know which parameters of their business you can act upon. And I think that's really the literacy we need right now.
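[Illustration: the three-component framework Max describes, technology, business model, and digital resources, written down as a simple data structure. This is a loose rendering of how he explains it on air, not Leitmotiv's published framework verbatim; the ChatGPT entries are assumed labels.]

```typescript
// Loose encoding of the framework described above; the example values are
// assumptions used for illustration only.

interface DigitalProduct {
  name: string;
  technology: string;         // the model, protocol, or software itself
  businessModel: string;      // how the technology is monetised
  digitalResources: string[]; // the "iron ore": data, compute, infrastructure
}

const chatGpt: DigitalProduct = {
  name: "ChatGPT",
  technology: "GPT language models",
  businessModel: "subscriptions and usage-based API fees",
  digitalResources: ["training data", "GPU compute", "data-centre infrastructure"],
};

// Knowing which component a rule targets tells a policymaker which lever
// they are actually pulling.
console.log(
  `${chatGpt.name}: regulating '${chatGpt.businessModel}' is a different lever ` +
  `from regulating '${chatGpt.technology}'.`,
);
```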
- Zuzana
Yeah, you need to know the whole anatomy of the technology to truly know how it affects not only consumers, but also the company's own operations.
- Max
Yeah, and also the incentives and how value is created. Again, smelting iron ore into steel is a very simple value proposition, and it's a very simple way to see value. We have a lot of public discussions, again, let's stick with Meta, about the advertising business model, and we have a lot of opinions that it's not creating value. But we also don't really understand what value is now coming out of these companies. And I think we need to know that. Why do we need to know that? Because we want to steer the creation of more value, or we want to adjust how the value comes out into society and how we benefit. And to do that, again, we need to understand what you call the anatomy of the business. Very simple. Then we will find the answers to those questions.
- Zuzana
So how can we capture this value? How can we spot it and not just think about it becoming more and more?
- Max
Yeah, at the moment that's a really complicated conversation, because the value posturing is driven by the companies themselves at a scale that I think is unprecedented. When ChatGPT came out, the amount of press releases and ideas and articles about what you could do with AI, all these potential ways it could create value, basically polluted the narrative with this opportunism, all the stuff we could potentially do. And probably a lot of these things are potentially possible. But we've never had the conversation: is that what we want? Is that good for us? And we don't have enough time, because we're pushed by the tech companies to go faster and faster and faster. We don't have enough time to really think this through, to have a democratic process, to survey citizens, to have very large-scale citizen engagement, and to really first think about what value we want to see as a society. And then the second question that comes after that is: how do we shape these businesses, the business models, the technologies, in a way that we get that value out?
- Zuzana
I think we could also say that it is part of the ideology of Silicon Valley to see all the artistic industries, or anything really, as a problem to fix, right? Not really as a process to go through to create products, but just a problem to fix so that it's quicker, more efficient, and saves money.
- Max
Yeah, I think the ideology is also that politics and governments are slow and inefficient and things like this. But we have to remember that democracy is slow for a reason: it has to bring everybody along on the journey. And it's a great equalizer, too; our constitutions demand equality before the law. You need a participatory process, and that is inherently slow. And for tech people, and again, I was part of that community for a long time, that just feels really unnecessary and slow. What we learn is to push stuff out, test it quickly, and then scale, scale, scale. And that just runs completely counter to the democratic process. I don't have an easy solution for this. But maybe we can also leverage some digital technologies ourselves in the policy space to rapidly ask people which direction this should go, and basically increase the pace of democracy to match the pace of the digital economy.
- Zuzana
And just a personal question, out of curiosity: what is it like, having had the experience of being in that Silicon Valley environment of scaling thinking, and now being in this democracy-slowness type of place?
- Max
Yes, that's a very tricky question to answer in a politically correct way. I think it would do the civil society and non-profit sector well to think about scale in a similar way, because, you know, we see the shift to the right in Europe and the US, and that's partly driven by the fact that a lot of these parties are leveraging digital platforms better than other parties, for example. And I'm not saying that we all now need to get on TikTok, but I am saying we can build tooling, we can build our own platforms, where democracy can happen in a new era, which is a digital era. And we can meet people where they already are and shift them onto platforms where they can participate in shaping that digital future. And I think, again, technology is always neutral; it's what we do with it that really matters. So instead of being afraid of, or totally against, any form of digital technology, the question is rather: how do we use it to do good? And personally, I really want to see the stuff that I used to work on, which is software, be used to really maximize our well-being, to maximize how democracy works, and to be there to do good and enable good things. And I hope that the civil society sector can embrace that mindset a bit more.
- Zuzana
I couldn't agree more. And I think that's a great way to end our first episode. So thank you for joining me today.
- Max
Thank you for the great questions, Zuzana.