- Rufus Grig
Hello and welcome to The Learning Curve, the podcast from Curve that delves into the latest developments in information technology and explores how organisations can put them to work for the good of their people, their customers, society and the planet. My name is Rufus Grig and in this series, with the help of some very special guests, we've been looking at all things generative AI. We've covered a lot of ground over the previous five episodes, from first principles of AI and machine learning to the ethics and sustainability implications of doing AI responsibly. And now, to wrap up this series, we're going to round off with a look at the current hottest topic of agentic AI, or AI agents, and then finish off with a few of your questions. And I'm absolutely delighted to say that I'm joined yet again by my partner in crime from the first couple of episodes, none other than the force of nature that is Will Dorrington himself, CTO of our transformation practice, Curve Digital. Will, welcome back.
- Will Dorrington
Nice to be back, Rufus. I really enjoy these. Looking forward to this. It's such a hot topic, and it's getting hotter as the weeks go on. So yes, starting to crack the door open on agentic AI will be fantastic.
- Rufus Grig
Brilliant. Okay. I'm looking forward to it too. And I think we'd better start with the obvious question: what is agentic AI? You did name-drop it back in episode two, and it's so hot. We hear people talk about AI agents, agentic AI. Go on, demystify it. Tell us what it is.
- Will Dorrington
So agentic AI is really the next step beyond generative AI. Generative AI creates content in response to prompts. You know, we all love and use ChatGPT. I'm sure you could put in a prompt to write you a poem about Rufus and his musical hobbies and it would generate one. Agentic AI works slightly differently: it adds autonomy. So it can plan, it can decide, and it can act to achieve goals. Think of it this way. Generative AI helps when you ask it to, you know, write some code or draft a report. Agentic AI works proactively. It breaks down tasks into steps, gathers all the relevant information and input, then makes decisions and iterates until it reaches your desired outcome, whatever that may be, all with really minimal input.
- Rufus Grig
Okay, so is it too simplistic to say that ChatGPT is one-shot? I'm going to ask a question, it could be quite a complex question, and I get a response. But agentic AI is likely to have multiple steps in the process, and the subsequent steps after the first one might be different depending on the scenario.
- Will Dorrington
Yeah, it's absolutely more iterative in the way that it works. So it's more of an evolution. It still uses the same foundational models as generative AI and it can, in theory, use the same sort of approach in regards to prompting, but it applies them in a goal-driven, adaptive way. And adapt is really key. So that shifts us from tools that react to prompts, or multiple-shot prompts, et cetera, to those intelligent agents that solve problems autonomously in both the digital and physical worlds. So it can reason on its own, and reason is a key word there.
- Rufus Grig
Okay. So before we get into some examples, I just want to pin down three words that are incredibly similar, that we're using, but that aren't quite the same: agentic, agency, and agent. They're all almost the same. Let's start with agentic. What does agentic mean?
- Will Dorrington
I don't always look at it in that manner, as in breaking it down with that sort of clarity. The way I look at it is: an agent is a system that acts autonomously to achieve a goal, right? So taking actions on behalf of a user or a system. That's an agent. That's the one that's actually achieving a goal and going forward. So terms like, I think you mentioned, AI agent, or if you look at software agent, all that is, is pure clarity that we're talking about tech rather than someone at the end of a phone call on a CCaaS system. The term agentic describes the quality of action: acting proactively on something. So that's the agentic nature, but the agent's actually carrying it out. And I think what helps expand that even more is, you know, agents will always operate in a loop: perceive, reason, act, iterate. So gathering those inputs and signals, whatever they may be, planning and deciding those actions, then executing those tasks and then readjusting.
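The perceive-reason-act-iterate loop Will describes can be sketched in a few lines of Python. This is a hedged illustration, not any real agent framework: the step functions, the goal check, and the toy counter example are all invented for clarity.

```python
# A minimal sketch of the perceive-reason-act-iterate loop.
# perceive, reason, act and goal_reached are placeholders supplied by the caller.

def run_agent(goal_reached, perceive, reason, act, max_iterations=10):
    """Drive a simple agent loop until the goal is met or we give up."""
    history = []
    for _ in range(max_iterations):
        signals = perceive()              # gather inputs and signals
        plan = reason(signals, history)   # plan and decide the next action
        outcome = act(plan)               # execute the task
        history.append((plan, outcome))   # readjust using what happened
        if goal_reached(outcome):
            return outcome, history
    return None, history                  # no success: a human would step in here

# Toy usage: "reach at least 5" by incrementing a counter one step at a time.
state = {"value": 0}
outcome, steps = run_agent(
    goal_reached=lambda o: o >= 5,
    perceive=lambda: state["value"],
    reason=lambda current, hist: current + 1,
    act=lambda plan: state.update(value=plan) or state["value"],
)
```

The point of the sketch is the shape of the loop: the agent keeps cycling through the four stages, carrying its own history forward, rather than answering once and stopping.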
- Rufus Grig
All right, brilliant. Thank you. That's really cleared that up. So let's have a think about some real life examples. And then perhaps we can unpack what's really going on under the cover there. You know, I always find talking about something in real life helps take it from the abstract into something I can really, really comprehend.
- Will Dorrington
Sure. Obviously, we discuss this offline for a few minutes before we go on to these podcasts, when we remember we're doing them. One that stuck in my mind was: oh, it can help you complete boring IT tasks. And I'm like, yeah, anything boring is top of my list. So think about the way that we have to monitor and run system checks to see if there's some security patching. You can instruct an agent to say: hey, proactively monitor this system as often as you like, then check what's missing, install the updates, and once you've done that, go back and tell us what you've done. So give us a report. But you're not having to get involved there. What you may have to get involved in, and where the human in the loop comes into it, is when something fails, and it fails again, and it fails again, and you're not sure why. Another great example is booking some travel, right? So the agent can pull your calendar, book your flights, your hotels, your rental cars, your reservation at the bar you like, based on your preferences and the delegation of authority you give it over your bank account up to spending limits, and then send you the complete itinerary. So it can work as your personal assistant there as well, which is fantastic. I mean, it's going to be one of the first things I create, for sure.
- Rufus Grig
The agent that's monitoring a critical system, that seems to me like a development of something that we've had in IT for quite a long time. You've got a system, I don't know, maybe it's an air conditioning system or whatever it is, and I've got tools that are monitoring it. And what will probably happen if it detects something's not going right is, in the old days, a klaxon would sound; more likely, a little dashboard light changes colour from green to red. And somebody sitting there monitoring it then does something about it. So that's the old world. And the new world is that the klaxon effectively informs an AI agent that can then understand what it might need to do to resolve the situation, and I only sound the klaxon to the human if the agent fails to resolve it automatically.
- Will Dorrington
That's exactly it. So actually, even in the new world, before we started getting to the agents approach, we would maybe run an error code through a large language model to understand what it means in plain language. But it would still go to, you know, someone within IT, someone like you or I, to go: okay, yep, we know what that's missing, we need to update that, but it's having issues unpacking the package, or whatever it may be. But now it's going: that's the issue, I've now been given the agentic approach to go off and actually execute this myself, up to a certain limit. So it will go off and execute. You're spot on.
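The patch-monitoring example that Rufus and Will walk through, check what's missing, install, report, and only sound the klaxon when it keeps failing, can be sketched like this. The function names (`check_missing`, `install`, `report`, `escalate`) are stand-ins, not real patching APIs, and the retry limit is an assumed policy:

```python
# A sketch of the patch-monitoring agent: it handles the routine work itself
# and escalates to a human only after repeated failures on the same patch.

def monitoring_agent(check_missing, install, report, escalate, max_retries=3):
    """Return True if everything was handled without human intervention."""
    missing = check_missing()
    if not missing:
        report("All patches present; nothing to do.")
        return True
    for patch in missing:
        for attempt in range(1, max_retries + 1):
            if install(patch):
                report(f"Installed {patch} on attempt {attempt}.")
                break
        else:
            # It failed, and failed again: this is where the human comes in.
            escalate(f"Could not install {patch} after {max_retries} attempts.")
            return False
    return True

# Toy run: one patch installs cleanly, the other always fails.
log = []
ok = monitoring_agent(
    check_missing=lambda: ["KB001", "KB002"],
    install=lambda p: p == "KB001",
    report=log.append,
    escalate=log.append,
)
```

The shape mirrors the conversation: the klaxon (the `escalate` call) only reaches the human when the agent's own loop has run out of options.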
- Rufus Grig
Okay. All right. Let's move on to your other example, which was more personal, the booking of travel. Because, I mean, travel, it's really complicated. I've got to get somewhere. I know I need to be at this place in this country at this time. And you were working backwards from that. Do I need to get there the day before? You know, do I need a hotel if that's the case? You know, how long is it going to take me to get from my house to there? What if I'm not in my house at the time I'd need to set off? There's multiple legs of decision making. I'm then looking at differences in cost. I'm looking at preferences around sustainability. That's really, really complex. You know, it taxes my brain trying to book something quite similar. Is AI really up to the job of managing those multiple steps?
- Will Dorrington
Do you know what? The answer is yes and no, isn't it? Because it's only as good as the information it can reach. So once you've given it the fact that you want it to go off and book this holiday for you, it will go through elements of reason. So if it already knows what your movements are, when you want to get there, and your preferences around location, you know, and everything else that comes up. I mean, I'm really fussy when it comes to hotels. I'm off on TripAdvisor, I'm Google reviewing, I'm looking at all those star ratings. So as long as you can say, this is the quality I want, then it can still go through that reasoning approach. So it's not just one shot and it's gone. It can come back to you and go: look, I found this, are these okay? I'm worried about the fact that this is a four-star or a three-star, doesn't have a pool, doesn't have a bar. So it can still give you the option, if you design it that way, to take further input before it goes off and executes. But it will get to a point where, if you feed it enough information appropriately, that human in the loop gets minimized, because your confidence in it goes up, because all of a sudden it's getting better answers, it's getting better responses. And in this case, it's booking better holidays for you, which would be a lovely outcome.
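The "come back to you and ask" pattern Will describes, act autonomously when the preferences are met and pause for input when they're not, can be sketched as below. The hotel data, the star-rating rule, and the `ask_human` hook are all invented for illustration:

```python
# Sketch of selective human-in-the-loop: the agent books on its own when
# the stated preference is met, and surfaces its concern otherwise.

def pick_hotel(hotels, min_stars, ask_human):
    """Return a hotel name, or None if the human declines the best option."""
    best = max(hotels, key=lambda h: h["stars"])
    if best["stars"] >= min_stars:
        return best["name"]               # confident: book it unaided
    # Not confident: raise the worry and take further input before executing.
    question = f"{best['name']} is only {best['stars']}-star. Book anyway?"
    return best["name"] if ask_human(question) else None

hotels = [{"name": "Harbour Inn", "stars": 3}, {"name": "Grand", "stars": 4}]
# A fussy traveller who wants five stars and says no to anything less:
choice = pick_hotel(hotels, min_stars=5, ask_human=lambda q: False)
```

As confidence in the agent grows, `min_stars` style thresholds can be relaxed so the `ask_human` branch fires less and less, which is exactly the "human in the loop gets minimized" point.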
- Rufus Grig
And that's really interesting, that feedback. Because we're talking about an agent that is built on top of AI systems, but we're training it. Indeed, training the agent in the same way that we'd be training the models the agent is relying on, in order to say: yes, I like this, or I don't like those sorts of trains. Or no, what do you mean I'm getting up at five o'clock in the morning? You know, those sorts of things to help it understand our preferences.
- Will Dorrington
Yeah, you're doing your own retrieval augmentation, but on your own personal wants and likes, which is quite a cool thing in this scenario.
- Rufus Grig
Yeah, really interesting. I've skipped over the part where you said I was going to give it access to my bank account. And I'm assuming somewhere it's possible to put some sort of block in that says, do I want to approve or maybe does my company's travel policy want to approve this bit of travel?
- Will Dorrington
Absolutely. But we get scared about these sorts of things. It sounds worrying because it's a new context we're approaching from a point of AI. But we've always given access to our banks for direct debits. And if you have multiple accounts, you have the ability to transact; you have to reconnect them every 30 days, et cetera. It's very normal. It's only when we say, hey, it's going to be AI doing this, that people freak out. But it will still be controlled: here's your spending limit, here's your delegation of authority, whatever it may be. So as long as you've set it up appropriately and responsibly, then it should be okay. We've seen loads of stuff happening on the news from different bots and even chatbots; there's an implementation nightmare waiting to happen. And yeah, I would say I'm looking forward to seeing it, but I'm not. I hope everyone has success.
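The delegation-of-authority idea, agent transacts freely under a limit, anything above it needs an explicit human yes, is simple enough to sketch. The limit values and the `approve` hook here are assumptions for illustration, not a real payments API:

```python
# Sketch of a spending-limit guardrail: auto-approve within delegated
# authority, require human sign-off above it.

def authorise_payment(amount, spending_limit, approve):
    """Return True if the agent may make this payment."""
    if amount <= spending_limit:
        return True                   # within delegated authority
    return approve(amount)            # above it: the human must sign off

# The agent books a small taxi fare unaided, but a big flight needs a yes.
taxi_ok = authorise_payment(45.0, spending_limit=100.0, approve=lambda a: False)
flight_ok = authorise_payment(900.0, spending_limit=100.0, approve=lambda a: True)
```

It is the same control we already accept for direct debits and corporate card limits; only the thing asking for authorisation is new.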
- Rufus Grig
I'm looking forward to seeing it in theory before I have to rely on it in practice, I think is probably what I'd say. And I guess, interesting as this stuff is for travel, just thinking about the business implications: there are business decisions that we make all day, every day, whether they're about deploying individual people, sort of service engineers, to a certain task, whether it's investing in a new program, you know, everything from the tiny to the huge: are we going to buy this company? What sort of role can agentic AI play in assisting human decision-makers, or even taking some of those decisions away? And how close would we be to those sorts of things being implemented?
- Will Dorrington
I think it all comes down to the actual use case, right? So from a business decision point of view, like managing finances, optimizing, you know, maybe the schedule for your workforce, handling customer service complaints, we're already starting to see leaps forward there. And that's where we're going to start seeing the first agents. So, you know, the fact that it can analyze the data, it can compare options, or it can look at policies and weigh up which options are best to apply based on the business needs, regulations, policies and constraints, and then recommend the best choice whilst generating the necessary documents or even decision outcomes. I think that's probably one of the things it's really good at. And what I like about that more is it's an internal-facing agent, so the risk is less, because the human in the loop's always present. You're not pushing it out towards a customer. You should still engage your brain. We spoke about this before: always look at what AI presents back to you and still be the SME on it. If you're not the SME on it, make sure you loop someone in who is. But, you know, exactly what I said: finances, scheduling, et cetera. It's going to be fantastic for making those business decisions.
- Rufus Grig
Okay. So these are great examples of what we could do. Let's say you and I have formed our subdivision of curve where we're going to do marvelous things and we want an agent to do something for us. How do we build it? What's the blank sheet of paper? How do we start?
- Will Dorrington
I'm more excited about this subdivision of marvelous things we're going to do. But, hey, let's answer that. So I see it in three ways now. Okay, so if you're building an agent, it can be done by a range of people. That's really important to know. It doesn't just need to be your technology wizards, but occasionally you do need them. So let's actually jump in, not on the democratized side, but the AI expert side. For complex agents, you know, that need much more specific machine learning or tailored custom AI models, then of course you're going to need specialists in AI and, you know, those data scientists coming in. But as we've seen, there are a lot of no-code tools. So many platforms now actually make building agents incredibly accessible to non-developers. Anyone can really do it through those drag-and-drop, what-you-see-is-what-you-get sort of interfaces. So in the near future, and I'd say arguably now, arguably not, more people without technical skills will be able to build agents. And that's for even complex things like customer service, up to more advanced automation. But I would say at the moment, everything's in between those two points. And what I mean by that 'arguably now and arguably not' is most cases fall in between those AI experts and those citizen developers, you know, makers without that technical knowledge, working with those with the technical knowledge to help integrate the more advanced and specialist needs that they may have.
- Rufus Grig
And what sort of tools would they be using? I mean, what sort of things are off the shelf? And feel free to name products. This isn't the BBC, you know. Not yet; I'm sure the BBC will be scheduling this soon.
- Will Dorrington
I always try and keep it agnostic, because one thing we're seeing is that we're getting AI coming at us from all directions. But you've got platforms such as Microsoft agents, you know, your Copilot Studio having gone to agents. And then obviously Google Dialogflow is quite a common one that's used. And then if you look at the more complex, the more advanced, you've got the likes of the Azure Bot Framework, and you've got Google, is it Google TensorFlow, I think, as the other option you have for those more complex needs. But, you know, for both of those, you're still going to need that middle ground, where you have the specialists and then the SMEs, the subject matter experts, of what you're building that for. So actually, we talk a lot about what skills you need for this, but basic problem solving and an understanding of the business is always going to be one of the most crucial needs: of the business case, of the usage, of the process, whatever it is you're trying to automate or create an agent for.
- Rufus Grig
So the business analyst's role hasn't gone away just yet. That ability to understand the problem that we're solving and translate that into the technology is still a really key skill.
- Will Dorrington
Oh, absolutely. And that's where it brings us back to our other podcast, where actually it should be getting easier, though, because generative AI in general will be able to, you know, do large portions of that role in a much more efficient, framework-driven way for those users. So they should all be leveraging Gen AI, but no, none of these roles are going away anytime soon.
- Rufus Grig
Yeah, that does quite neatly lead us on to the risks and cautions, I think. You know, we know that AI isn't risk-free. We've talked in the earlier episodes about, you know, hallucinations, potential data leakage, bias, those sorts of things. If I'm using a tool like ChatGPT, at least I can see the response it gives me back before I submit it as my history essay or my response to my boss or whatever it is. I've got that human-in-the-loop piece. With agents, I know you absolutely talked about the fact that sometimes there is going to be a human in the loop, but you're not going to have them in there at every step. What are the additional thoughts on governance or testing? What sort of controls do we need with agentic AI?
- Will Dorrington
It's becoming a more and more apparent topic, isn't it? It's like you said: because these are autonomous agents. So we say agents, but actually what people are going towards is autonomous agents. Everything you said, you know, the risks, the biases, the hallucinations, just amplifies because of the lack of real-time human oversight. And that's the worry, right? Especially when it's customer-facing. You rightfully called out all the ethical bias in decision-making. It really is a much more interesting world to navigate. But I think businesses can navigate this risk the same way as they would any other risk within technology, which is ensuring they conduct thorough testing, that they keep a human in the loop, and that they don't just see this as an extreme cost-saving exercise. Of course it will bring certain costs down, but you're still going to want to make sure that a human is overseeing a lot of this, especially for high-stakes decisions. For the stuff that you think, actually, we can get away with that, then, you know, it depends on your level of risk, but high-stakes decisions are, I think, really, really key. I think explainable AI is going to come more and more to the forefront to make decisions clear. And then obviously monitoring those systems continuously, keeping track. Don't just see it as: I'm going to throw this out there into the wild and we'll talk about it later and see what happens. And then, of course, all companies should have ethical AI guidelines and adhere to regulations like the EU AI Act. There's some really sensible stuff there. GDPR, we've always known about; we know and love it. Well, someday. And that just ensures that you're tightening your use of responsible AI rather than just letting it roam around as free as it so wishes.
- Rufus Grig
That's really interesting. And I'm really glad you mentioned that; I was going to ask about the ethical considerations. I mean, just the use of the word agency to describe an autonomous system that doesn't have that human in there, clearly there are going to be ethical implications. And things like the EU AI Act, which is a hard thing to say, that's a lot of vowels in a row, are going to become good guiding principles for ensuring that what we do is in line with society and fitting in with our ethics. Okay, look, that's been really, really interesting talking about this. What should a business do to prepare themselves for this and make sure they don't get left behind? What are the sort of first steps?
- Will Dorrington
That's the question of the year, isn't it? So I think, simply put, you need to set up an AI enablement program, which can consist of, you know, governance structures, nurturing adoption approaches, support models, operational models. But I think going through that on the podcast now would create a podcast in itself, so maybe that's one we bring a bit later. But for now, I honestly think governance is often not in place; we need to go beyond the buzzword. And everyone's so excited about this still, but actually identifying those use cases that can receive real benefit from this is still the absolutely crucial point that I keep getting to when I speak to clients: they want AI, they want it now. What do you want it for? I'm not sure, but I really want it. And then looking at that ethical AI, you know, really key; looking at all the integration of how this will work within the ecosystem of not just your technology landscape, but your business as well, because there's a lot of change management around that; and ensuring you've got the skills and training in place, and that scalability, that data security, and everything else that wraps around that. But we are working on, and this is a bit of a plug, I guess, an AI enablement sort of out-of-the-box solution. You know, and hopefully Rufus and I will do a follow-up podcast on that at some point, to give a little bit of a tease.
- Rufus Grig
I look forward to that. I absolutely do. Before we move on to the questions that we've been sent in, I'm going to ask you to have a look in your hypothetical crystal ball. Where do you think we'll be with agentic AI? I'm going to first ask by this time next year and then three years out. And this is an almost impossible question. And I promise not to come and knock on your door in three years time and say, hey, Will, remember you said this. But give us a bit of a view of the Dorrington thoughts.
- Will Dorrington
Well, I'll keep the Dorrington thoughts only to that question, otherwise it's going to get weird. So I think by the end of 2025, I would hope that we're seeing much more maturity and adoption towards AI in general, and we're starting to see those spurts of agentic AI being adopted. You know, because if we look at it, any technology does take a while to be adopted. First of all, people are freaked out by it and they say, shut it down. Then all of a sudden it's, how do we adopt this? Then the use case gets stronger, then everybody's using it, and then you go into the governance side and the enablement side. But I also hope to see that some of the people at the forefront are getting so much better at using agentic AI that they're actually using agent-to-agent type processes. That could be to make those more advanced business processes, or even whole departments, run a lot more smoothly, or just to personalize those interactions. I also think our approach to governing and adopting is going to get a lot stronger, with AI ethics and explainable AI and transparent AI being at the forefront of that. And we'll see the market flooded with how people are achieving it, which I'm very excited about. Maybe there's still some good ideas.
- Rufus Grig
Thank you. I look forward to that. And I'm absolutely with you around that explainability. I think the key to the public adoption and acceptance of this technology is going to be that transparency and the ability to explain what's gone on. Why have we made this decision? Why is this a reasonable decision to make? I think it's going to be absolutely critical.
- Will Dorrington
No, I agree. And you also asked about a few years out, and I always like to relate that to the five levels from OpenAI. The first one we've already done, which is chatbots and conversational AI. Then you get to reasoners, which is, you know, just before agents, actually. So it's where the large language models can actually start solving relatively basic to advanced problems at a doctoral level. And then you get to three, which we're starting to get to now. You've noticed that OpenAI's ChatGPT product is pushing this now, which is the agent side. So they've announced that they're going to start executing tasks for you within that ChatGPT application. Then you get to the level above this, which is innovators, which is where the model can actually invent something new that's going to change an industry. Okay, something that's brand new, that's never been done in the world before. That's when we get to level four. So as soon as OpenAI says we're at level four, that's really exciting; that could have so many implications for the world, especially if it starts breaking boundaries on the medical frontiers, et cetera. And then, this is the one that I think just blows my mind when you think about it. I don't know if this is three years out or not. I'll let you know in a year and a half what my prediction is. But that's organizations. So these are organizations that are completely and autonomously run by AI, providing a service, a product, whatever it may be. And I think there's something crazy like up to a billion pounds in value. I may be hallucinating myself on that one, but I'll have to double-check.
- Rufus Grig
Fascinating. Yeah. Really, really interesting. And for anyone who wants to see those five levels, I think it's available on the OpenAI website, isn't it? That paper. Yeah, really good. Thank you. Right. Before we finish, we will turn to a couple of your questions. Really, thank you for sending them in. I must admit, when I asked for questions, I did wonder whether I'd be looking at tumbleweed on the screen, but we've had definitely more than we've got time to handle. So thank you for sending them in. There's a few that slightly overlap, so I'm going to try and bring them together. We had a couple of questions, one from Sanjay and one from Flora, around the sustainability, and particularly the power consumption, implications of AI. What's the outlook looking like for bringing down the energy costs of AI?
- Will Dorrington
To be honest, I love that question, because it shows people are putting the sustainability of AI first. And as we know, it uses a huge amount of electricity, but we're seeing a lot of advancements in hardware efficiency. So, you know, specialized AI chips, the GPUs and TPUs that we've seen from NVIDIA, et cetera. They are becoming more efficient; they are lowering energy consumption. At the same time, AI models are actually, you know, trimming the fat off as they go. It is something they're conscious about. So I know one of the techniques is pruning, and I think the other is something with a wonderful name like quantization, simplifying the numbers that they use within large language models, et cetera. I also think, and it's not often called out, but I think it is really important, that the application maturity of consultants, of end users, is getting more impressive, where they know that they don't need the largest, most powerful models for everything. And actually, a small language model can pack a bloody good punch, and they will leverage that instead, which of course has a knock-on effect on the amount of energy that it uses.
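The quantization Will mentions really is "simplifying the numbers": storing model weights as small integers plus a scale factor instead of 32-bit floats. The toy scheme below (a single symmetric scale, no zero point) is a deliberate simplification of what real libraries do, just to show the idea:

```python
# A toy illustration of weight quantization: 8-bit integers plus one scale
# factor stand in for 32-bit floats, cutting storage roughly fourfold.

def quantize(weights, bits=8):
    """Map floats to signed integers in [-qmax, qmax] with a shared scale."""
    qmax = 2 ** (bits - 1) - 1                      # 127 for 8-bit
    scale = max(abs(w) for w in weights) / qmax
    return [round(w / scale) for w in weights], scale

def dequantize(quantized, scale):
    """Recover approximate float weights from the integers."""
    return [q * scale for q in quantized]

weights = [0.02, -0.54, 1.27, 0.8]
q, scale = quantize(weights)
approx = dequantize(q, scale)
```

Each weight now needs one byte rather than four, at the cost of a small rounding error, which is exactly the storage-and-energy trade these techniques make.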
- Rufus Grig
So people won't use more than they need to. They'll use the model that's required to do that particular job. And that could be one that takes, you know, a lot fewer processor cycles, or a lot less memory, or a lot less energy to burn.
- Will Dorrington
Exactly that. And if you couple that with the fact that the models are getting more optimized, and with the hardware getting more efficient as time goes on, the energy efficiency will arrive.
- Rufus Grig
Interesting, because, I mean, we've seen big public infrastructure announcements. We've had them in the UK, we've had them in the US, and there is a lot of money going into these infrastructure projects. And I guess managing them in an environmentally responsible way is going to be really critical as well, in terms of the way we build the data centers and source the technology. Great question. Thanks for the question. Another question, this one from Alfie. He says: how good is the code generated by Gen AI? So when you're asking it to write software, he says, with good prompts, does Gen AI produce code that's instantly applicable, or does a human programmer then have to go and edit it? And he's got a follow-up question, which says: is the code produced more or less the same as what a human programmer might have come up with, or are we seeing really new and innovative coding techniques?
- Will Dorrington
That's a really good question, actually. And I think it all comes down to the tool you're using, right? So if you go into ChatGPT, and that's what you're using, you know, as your design buddy, and all of a sudden it's spitting out this code, it's not going to be as efficient, you know, and I will get a lot of arguments on this, but I promise you, as going to a Codex model, which is a large language model that has then been trained on billions of lines of code from GitHub and others, and whose only purpose in life is to generate code. So using GitHub Copilot, a purpose-made tool, is going to be much more efficient, and it's going to have a lot more features to help you. But as we always say with these things, you still really, really, and I can't emphasize this enough, need to be a subject matter expert to be able to look at that and go, that looks really good to me, because it's still representing yourself and your work, and it still hallucinates. It still has biases, and it will still not be right 100% of the time. But the feedback I've had from the developers inside Curve that use GitHub Copilot and other tools is that it's been fantastic, especially with some of the tedious tasks, the testing, et cetera, et cetera. So it's definitely speeding up the productivity of the developers, and with time it'll get better and better. From my personal experience, it provides real quality results. And where I love to use it is like today: not feeling 100%, having to drink lots of coffees, feeling a bit tired, and it's doing that heavy lifting for me, which is fantastic.
- Rufus Grig
It's similar. There's another one on the topic of coding, actually, that says: if it's helping legitimate programmers with their programming tasks, are we seeing bad actors using Gen AI to develop malicious things?
- Will Dorrington
Absolutely. Whenever you get a powerful tool, people will use it for good and people will use it for bad. The ability to do much more personalized scamming, et cetera, has just gone through the roof. But that's the case in our world, unfortunately. As soon as something comes out... there's all sorts of people that make up the world, and everyone will use it.
- Rufus Grig
And it's really interesting, because there's another question, a great question from Ed, which says: what are the downsides of Gen AI and how do we mitigate them? And I guess this is a killer one. I was just thinking of phishing emails. So those emails that you get that try to lure you into clicking on something and providing your personal details or credit card details. And typically they were really easy to spot, because the language was bad, the grammar was bad, the spelling was bad, the formatting was bad. And, you know, you could spot them a mile off: you know, Royal Mail spelt M-A-L instead of M-A-I-L, or something. Gen AI, you know, can write really sophisticatedly. It can write better language than a lot of people, than me on a bad day. And it can also format things brilliantly. It basically makes much more realistic and believable phishing emails, which I think means we really have to up our game from a sort of information security, cybersecurity point of view, in terms of being able to protect against those things. It's a bit like providing the bad actors with good software skills; it's providing them with good English literature type skills as well. That's definitely something that we're seeing really regularly. It's having a significant impact.
- Will Dorrington
But with that, you know, we are seeing tools that will then fight against that, that are powered by AI as well. And that's always the way as anything advances.
- Rufus Grig
And then I suppose on...
- Will Dorrington
Maybe even achieving a bit more public infamy have been some of the deepfake images and videos. During the UK general election campaign there were videos of multiple political leaders saying things that they never actually said. Misinformation and manipulation, it's getting scary. I mean, if anyone followed Cambridge Analytica and the crisis that happened there around the Brexit campaign, and the first Trump campaign as well: that can now be done automatically. They don't need to worry about going after these big hubs of very clever data scientists, data engineers and social engineers; a lot of it can be automated. And that's exactly what we've seen. It's an interesting world to be in.
- Rufus Grig
Very interesting world to be in. But there's one really nice question to end with, I think, which is, as a young person interested in AI, what are the best things to be reading, to learn about the topic? And what are some tips for breaking into the industry as a young professional?
- Will Dorrington
It's a really good question. I think one of the best skills you can equip yourself with nowadays, and it's often overlooked because people think it's easy, is writing a bloody good prompt.
Knowing how to write a prompt and how to get the most out of these tools from a user perspective will be half the battle in understanding them and breaking into the area. If you want to go deeper than that, into the more technical side, it depends whether you want to be a data engineer, a data scientist, a data visualisation expert, or even a developer, and all of those have their own paths. If you go into data science, start looking at the history of AI and how it got to the large language model: how Google first published the word-to-vector work, word2vec, and then "Attention Is All You Need" came out. There's a ton of literature out there that will let you build up a picture of how these colossal foundational models actually work. And it's fascinating, it's just phenomenal. But start off small: start by understanding how to actually use them first, and then break into the science. That's my recommendation.
- Rufus Grig
Brilliant. Thank you, Will. And a recommendation from you is a very worthwhile recommendation indeed. Will, thank you as always. It's been absolutely brilliant chatting with you. I know you've got a ton of stuff to be getting on with, so really appreciate your time. Thank you.
- Will Dorrington
No, I love coming on these. I really do, mate. So thank you for having me.
- Rufus Grig
Brilliant. So if you've been interested in what we've had to say, do please get in touch and tell us what you think. You can find out more about Curve, and about AI in general, by visiting us at curve.com. And do please listen out for the next episode; you can subscribe and get hold of us on the podcast platform of your choice. Do tell your friends. Thank you also to Kim and Ed for keeping this thing on the road, and to all the guests we've had on this first series. Thank you very much for listening, and goodbye.