- Speaker #0
This is the next chapter.
- Speaker #1
Welcome to Paradigms, a podcast by Leitmotiv, the independent digital policy think tank that exists to operationalize values in our digital future.
- Speaker #0
In each episode, we unpack the latest developments and debates in our digital world. Together, we try to make sense of what is happening to our digital future, who is shaping it, and what we can do to change its path.
- Speaker #1
My name is Zuzana Ludvikova and I'm here to ask statistical questions.
- Speaker #0
And my name is Max Schulze and I'm here to give simple answers. At Leitmotiv, I think about the policies that shape our digital future and how to operationalize our societal values within them.
- Speaker #1
And my job is to bring our thinking to your ears, eyes, and minds. In our podcast, we invite you to think with us as we unpack the building blocks of our digital world and work towards a better future that amplifies the best of our core values and our humanity. Today's theme is agentic AI and the relevance of human input. It is more than obvious that nowadays AI, or machine learning, emerges in different shapes and forms around us: in all industries and all societal layers, as well as in all commercial, technical, non-commercial, personal, and artistic avenues. We hear about AI tools and AI assistants, but more recently also AI agents. They all differ in how much attention we need to devote to them to use them, for example through prompting or correcting their answers. At the same time, AI agents seem to operate more independently, have greater autonomy, and may have greater power in decision-making. Max, for starters, I have a simple question and I want a simple yes or no answer. Do we, a global non-profit think tank, have an AI agent?
- Speaker #0
That's already a good question to begin with. You wanted a yes or no answer. I think the answer is no, we don't have an agent. We do use AI in some of our automations though, and I'll explain that in a bit.
- Speaker #1
And why don't we have it?
- Speaker #0
That's a more complex answer. I think the great promise of AI, the vision that is sold to us, is that these systems can learn from us and can do things autonomously. A simple example is always booking a flight: you can just say, I want to fly to Berlin, can you book it for me? Technically, from what I've seen and what's currently out there, I don't think we're that close to that yet. But of course, in terms of what companies want to portray, they want us to think that we are very close to it.
- Speaker #1
And there are also all these terms in the AI market, which sometimes I'm also confused about: AI tools, AI assistants. So when does an AI assistant stop being an AI tool? And when does it start being an AI agent?
- Speaker #0
Yeah. So, using our school of thought for this a bit: first of all, what is the fundamental technology here? The fundamental technology is deep learning, and that can be used to train different types of models. Some of these models are text-focused, and we call them large language models. There are also now multimodal models that can generate images and videos. They are trained on data using the technique called deep learning, and the outcome you get is essentially a piece of software that you can prompt. So that's the fundamental technology. Now, the first digital product that was made with this technology, and the biggest PR stunt, was an AI assistant: that was ChatGPT. And it is essentially a way to experiment with the technology. It allows us to experience the features and the power of the technology, but it doesn't actually solve a problem, if you think about it, right? The cost of providing that chat experience far outweighs the value it creates for us. Yes, it can generate a social media post, for example. But is that worth, let's say, two, five, six thousand dollars a month in a subscription? Probably not. And you see that in the financials of a company like OpenAI, which is still making heavy, heavy losses to basically finance us having that experience of playing with the technology. And that's very important to them, because they want to convince us that this technology has great potential. So it's a little bit like getting a car to drive for free to test it out.
- Speaker #1
It's another training program designed to teach you one thing.
- Speaker #0
Now, to your question: where does the tool come in? I think, looking at the technology, everybody understands that the cost of running it is very high. So in order to justify these costs, you need to find very high-value problems that you can solve with it. Instead of saying this assistant can generate a social media post for you, what you want to say is: this tool can run your marketing campaigns for you. So the tooling language comes in when you basically say, well, this is not just a chat; it is something that can do processes for you and can do higher-value things. And if you take that even further, the idea of an agent, a bit of a virtual employee, is that this product I'm using can do things fully autonomously. And that, in the perception of the customer, has the highest possible value. I know that OpenAI floated prices for this that were, I think, in the range of $20,000 a month. And I think only at that agent level of product will these companies actually be able to make a profit. That's why they're really hunting down that narrative right now: yes, we can build agents that can do things autonomously, that's very valuable to you, and this will essentially be our business model. They're hunting for a value proposition that can make enough money to finance the cost of developing and operating the technology and the products.
- Speaker #1
Right. So we're not there yet. We're still, you say, at the stage where they're sort of making us play with it or inspiring us to play with it to make us dependent on it eventually.
- Speaker #0
Yes and no. I think there are two very important playbooks in the tech industry. The first one is the idea of the lean startup, and the other is summed up in a very important sentence: fake it until you make it. The idea is that you should always have customers first, before you build a product. And if you have to kind of say that you can do something you can't actually do yet, but you're convinced that later you'll be able to build it, then that's okay, just go out with it. That's the fake-it-until-you-make-it philosophy. Right now, what these companies are doing is, yes, letting the free chat service run to get us hooked and get us impressed. And they want us to think about all these amazing things we could do with this if it were more autonomous. And then they say: here you go, we can now do it autonomously; all we need you to do is tell us how much you would be willing to pay for it. At the moment, we're still in the phase where these things can't actually do agent-level tasks or agent-level functionality, but they're dangling it in front of us to test whether we would be willing to pay a higher subscription for it, essentially.
- Speaker #1
Right. And on the level of companies: if they were to have these virtual employees or AI agents, how expensive would it be, do you think, to maintain them, to have them in the company?
- Speaker #0
Yeah, I doubt that these systems, also judging from the output that I've seen, are really employee-like. I think for a very long time we will still stay with the augmentation idea: a company buys a product, which may be an agent-based AI, and that makes the work of an employee easier. So it's like giving everybody a junior employee that they can work with. But I fundamentally don't believe that, if you really want to build a high-value company and a high-value organization, these agents will perform work at the level you probably want it to be, for a very, very long time.
- Speaker #1
So if we imagine maybe a dystopian scenario where AI agents speak to AI agents and sort of manage to circumvent a human who could otherwise keep tabs on all of that, is this too far in the future, do you think?
- Speaker #0
I think systems being able to talk to each other has been around for a long time, and AI can potentially write the code you would need for it to talk to something else by itself. But for all these things to be realistic and a real danger, this thing would have to be conscious and would have to be able to evolve, and that would really mean AI, a term that is very misused right now, even though of course there are a lot of people who want us to believe that these models are artificial intelligence, when they are simply models trained on something. And yes, they can give you funny answers, or answers that make you think this thing is conscious, but it's just outputting words and images based on the training data. I have not seen a model that basically gets better every week, evolves, is conscious, and runs by itself. At the moment, most AI models run when you prompt them. They don't run autonomously and think by themselves for hours and go, well, maybe I should go talk to that other agent right now. That's not where we are, and we're still, I think, very, very far away from that place.
- Speaker #1
Yeah. And this would also mean they would have to run all the time, right? And use up a lot of energy.
- Speaker #0
Yes. The process that is mostly responsible for a lot of the energy use right now is the training phase, and there a model is running the whole time, but for a specific purpose, which is to train it. Afterwards, the cost of keeping something running all the time is prohibitive, I would say, at the moment. Though I did see an announcement from Google that they want to offer this inside Google Cloud: you have a model that is basically continuously looking at data that's coming in, for example emails, or even data that comes in from a camera or a continuous stream, and you are essentially prompting it continuously. But what that means is that basically every 30 seconds the model just looks at the new data with the same prompt and tries to decode it or give an answer. Again, it's not running the way your brain is on, constantly processing your information while also having this sub-conversation that happens in your head, the thinking part. None of those things, at least as far as we know today, actually happen. And then, just work like hell. I mean,
- Speaker #1
you just have to put in, you know, 80-hour...
- Speaker #0
80-to-100-hour weeks, every week.
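To make that concrete, here is a minimal Python sketch of the polling pattern Max described a moment ago: the model is not running continuously; an ordinary scheduler simply re-prompts it whenever new data arrives. The fetch and model-call functions below are hypothetical placeholders, not a real Google Cloud API; only the 30-second interval is taken from the conversation.

```python
import time

PROMPT = "Flag anything urgent in the following new messages:"

def fetch_new_data() -> list[str]:
    """Placeholder for a real inbox or stream poll; returns demo data here."""
    return ["server down?", "lunch on friday?"]

def ask_model(prompt: str, messages: list[str]) -> str:
    """Hypothetical stand-in for any chat-completion API call."""
    return f"(model reply covering {len(messages)} messages)"

for _ in range(3):                 # a real automation would loop indefinitely
    batch = fetch_new_data()
    if batch:                      # only spend compute when there is new input
        print(ask_model(PROMPT, batch))
    time.sleep(30)                 # the model is idle between polls
```

Between polls nothing is computed at all, which is exactly why this is cheaper than a model that "runs all the time".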
- Speaker #1
Interesting. And to bring in another case, also from the nonprofit sector that we are in: recently, the nonprofit, volunteer-based Wikipedia integrated an AI strategy into its mode of operation. On the 30th of May, they announced in their press release that AI does not replace Wikipedia's human-created knowledge, but helps editors by automating certain processes and tedious tasks. Not an AI agent, but a similar kind of help is also something we have in our Leitmotiv workspace, as automations that you created. They save us time and let us work on more things simultaneously, since our team is quite small. But how do you look at all of that happening within the nonprofit sector?
- Speaker #0
I think it's fundamentally a good thing, because it also shows something that I think is really important. So you have AI models, and you have providers that host them for you so you can prompt them. But as you just pointed out, the real value is created by a human being, maybe a software developer or someone less technical, gluing the capabilities of a model into a process and automating things. And I think that's a beautiful thing, because it can really remove some repetitive tasks, especially when quality is not so important. A good example may be that you have a massive amount of emails coming in and you just want to flag the ones that are the most important. Maybe 20 percent get flagged wrong; you answer them anyway, just slower. To me that's not a problem. But it shows you that, probably for now and for the next maybe five to ten years, the real value will be in gluing these models into processes. And then, from an environmental and sustainability perspective, I would say: let's just make sure that we glue them in there in a way that, first of all, we can easily replace them with something coming to the market that runs on more sustainable infrastructure, and that we continuously monitor the energy it really costs, so that we can really weigh it. Yes, this automation saves us time, but at what cost? That's something I always try to think about when I automate things for us: okay, this maybe saves four hours, but if it uses the equivalent of 10,000 liters of gasoline, then it's not worth it. And that's really, I think, another good example of what I mentioned before: yes, you could use an AI assistant to make social media posts. But honestly, for us, our quality requirement is so high that we would end up editing it anyway. So we would then spend, let's say, 100 liters of gasoline to make a draft that takes another three hours to edit. Well, then we might as well just spend three hours writing it to begin with, you know. And sensible automation is very important, I think, with AI, because you can do a lot, but not all of it really makes sense and really creates value in relation to the real cost of doing it.
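As a concrete illustration of that email example, here is a small Python sketch of gluing a model into a triage process while keeping a running energy tally, so the time saved can be weighed against the cost. The classifier stub and the watt-hours-per-prompt figure are illustrative assumptions, not measured values or any real provider's API.

```python
WH_PER_PROMPT = 0.3   # assumed watt-hours per model call; measure, don't guess

def classify_email(subject: str) -> bool:
    """Hypothetical model call; in practice this would prompt an LLM."""
    return "urgent" in subject.lower()

def triage(inbox: list[str]) -> tuple[list[str], float]:
    """Flag important mail and tally the energy spent doing so."""
    flagged = [s for s in inbox if classify_email(s)]
    energy_wh = len(inbox) * WH_PER_PROMPT   # the cost side of the trade-off
    return flagged, energy_wh

flagged, cost = triage(["URGENT: invoice", "newsletter", "urgent: outage"])
print(flagged, f"~{cost:.1f} Wh spent")
```

If roughly 20 percent of messages end up flagged wrong, the mail still gets answered, just slower, which is the kind of quality tolerance that makes such an automation sensible.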
- Speaker #1
Yes, and as the world gets bigger, it's nice that in our case it's not only that we use AI for sustainability, but also that we make sure to use AI sustainably.
- Speaker #0
Yeah. And I think that's also sometimes our role: to be a role model. Something that's very special about us is that we are not so afraid of technology, and we really deploy a lot of technology in our work. We show that you can use open-source models and run them yourself. For example, we have a way of running our models that always runs them in a country where there's a lot of renewable energy available. Most of our other infrastructure runs on refurbished hardware in a regional data center that's actually run by a small family. Technology can be used for good things, and it can really be used to remove work that is, so to say, not worthy of a human. But you have to be really thoughtful about how you do it. In the philosophy of technology there's a saying: technology is neither good nor bad,
- Speaker #1
It's what you do with it that makes it good or bad. Yes, exactly. But that was about the nonprofit work settings we talked about. Is it possible to integrate AI ethically or responsibly even in commercial settings, for example if we stay close to these predetermined values or principles? Is it really feasible right now, do you see it happening, and what would these values or principles be, in your view?
- Speaker #0
That is a big, big one. I think in the commercial context you have to be really careful, because I think there's a very big risk of using, let's not say agents, tools, and assistants, but let's just say automation, as consumption induction. To give an example: you could use a model that generates thousands of video ads at a fraction of the cost and a fraction of the time it takes today, and just flood the internet with millions of posts about your product, essentially with the goal of inducing more consumption. Is that unethical? No, marketing and ads have been around for a long time; you just increase the efficiency of marketing by orders of magnitude. So there we have to be careful. I do think that a lot of companies have a lot of employees who are often heavily overworked with work that doesn't need to be done. Think of time recording, uploading expenses, putting vacations into a calendar, scheduling meetings between each other. I think there is value in automation there, which actually might not need AI at all. It just requires well-thought-out software and automation that could free people from stressful, meaningless work so that they can really focus on creating value. I think that's a good, positive example. And the principles I would take from that are to focus our efforts on automating things that create value for the employees, for the humans, and for the company, and to refrain from using it very heavily to drive increased consumption or in marketing. I think that gets us on the wrong track. And I would also caution that if that were to happen at scale, regulation and policymakers would relatively quickly put an end to it, similar to political influence, where we've seen it immediately being used, of course, and then relatively quickly a movement to constrain it. So I think it's not worth even going there as a company; just stay away from it immediately and really focus on how we can create more value for the humans in our organization, for the employees, and for us as a company.
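To illustrate how much of that can be plain, well-thought-out software rather than AI, here is a tiny rule-based Python sketch covering two of the chores Max lists. The helper names and rules are illustrative, not any specific product's API.

```python
from datetime import date, timedelta

def next_free_day(busy: list[date], start: date) -> date:
    """A trivial scheduling rule: first day not already booked. No model needed."""
    day = start
    while day in busy:
        day += timedelta(days=1)
    return day

def file_expense(amount_eur: float, category: str) -> dict:
    """Turn a receipt into a structured record, deterministically."""
    return {"amount_eur": amount_eur, "category": category, "filed": str(date.today())}

busy = [date(2025, 7, 1), date(2025, 7, 2)]
print(next_free_day(busy, date(2025, 7, 1)))   # -> 2025-07-03
print(file_expense(42.50, "travel"))
```

Deterministic rules like these cost almost nothing to run, which is the point of reserving model calls for tasks that actually need them.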
- Speaker #1
This is very interesting, because you mentioned regulation and sort of maintaining the human value within all these processes. But on a different note, to address the moral panics around AI: for decades there has been discourse around technology becoming too good and replacing human jobs. Some people say in practice it's already been happening, for example with delivery services using robots or drones instead of human gig workers to save money or cut losses, or in supermarkets, where self-checkout registers speed up the shopping process. So will AI, or these AI tools and AI agents, replace human work, or make it more efficient, or both? Or will they sort of make the human value bigger? And what are the largest implications and opportunities for regulation here?
- Speaker #0
Yeah, I would give a bit of a cautious answer, because I think we don't really know. The same was true with the steam engine; the same was true with large-scale electricity grids. It led to more industrialization, which both created and destroyed jobs. But honestly, I don't really know. What I would caution, and would like to see from regulation, is that we have very, very good monitoring of this situation, so that we can steer appropriately and protect citizens from potential outcomes that we don't want to see: outcomes that don't improve the quality of life, don't improve the human experience, and don't create value for humanity. I think regulators really need to step up and have sophisticated monitoring processes in place so that we can really see the level of AI deployment: how many companies are using it, and for what? And then a very clear mechanism to monitor and observe: has this displaced jobs? If so, what kind of jobs, how many, and in what industries? So that we are capable of course correcting, and it's not something that happens, that citizens feel and everybody sees, while regulation doesn't act because of a lack of data. So right now I would invest in the capability to monitor and track what's going on in a very sophisticated way.
- Speaker #1
Right. So instead of rushing towards the deployment of AI and saying it's inevitable, you'd rather recommend, as a good way forward, stopping for a moment and focusing on monitoring exactly what's happening around us.
- Speaker #0
Yeah, exactly. I'm a big believer in ordoliberalism, which is a philosophy in political economy that is really about setting the right framework conditions for a market and having a lot of data on how that market is evolving. With AI, it's brand new, and we have an opportunity now to set up the infrastructure and the institutions to have a really good view into that market. We didn't have that for data centers, and now we have data centers growing all over the world, and everybody's like, oh, we didn't see that one coming. We had the same with fiber optics and telecommunications, where we still have very little insight into how much traffic flows from one country to another, what is in that traffic, and what's creating it. With AI, it's an opportunity to learn from that and say: we need a strong grip on the societal effects of this, and we need to be able to track them very closely, so that within a week or a month or a year we could course correct very quickly if needed. That way, the EU and all the other bodies that are really pushing it can put more gasoline on the fire, but at least we see what's burning inside the fire while they're accelerating the burn.
- Speaker #1
Yes, and we have maybe more control over the fire.
- Speaker #0
Yeah, and as a citizen, I would feel much more comfortable if I knew that the government was monitoring this very closely, with public dashboards and regularly updated data. I would feel more comfortable and safe if I knew that they were fully capable and able to act, if acting becomes necessary.
- Speaker #1
Yes. Thank you very much, Max, for talking to me today.
- Speaker #0
Thank you for asking wonderful questions.