FinTrends

AI for all: building technology that leaves no one behind

35min | 23/06/2025

Description

In this powerful episode, we dive into how gender bias in AI isn't just a tech problem—it's a sustainability issue, too. Join Bettina (Head of Sustainability at SBS), Dana (AI & Data Analytics Lead at SBS), and Shaleen (Head of Product Design at SBS) as they break down the hidden connections between inclusive technology and environmental impact: Why does AI often disadvantage women and marginalized communities? How can fairer algorithms also be greener? What are banks doing to design apps that really work for everyone?


Hosted by Ausha. See ausha.co/privacy-policy for more information.

Transcription

  • Speaker #0

    Welcome to FinTrends, the podcast series where we explore the hot trends and news in the financial sector with experts. Today, we are unpacking a complex but crucial topic: gender bias in AI, and its unexpected connection to sustainability. To explore this, I am joined by experts at SBS: Bettina Vaccaro-Carbon, Head of Sustainability; Shaleen Kishore, Head of Product Design; and Dana Lenbury, Head of Data Analytics and AI. Together, we will dive into how bias shows up in financial technology, how inclusive design can make a difference, and why building fairer, greener AI is not just the right thing to do, but also the smart thing. Before we get started, please everyone introduce yourselves.

  • Speaker #1

    Hi, my name is Bettina. Thank you, Caroline, for having us today. We already started this discussion a couple of months ago, about our work in digital sustainability and sustainability as a whole. My role at SBS is Head of Sustainability, which means that I cover all of the CSR topics within our company, and in particular a topic that is quite dear to our heart, which is digital sustainability.

  • Speaker #2

    As for me, I'm Shaleen Kishore. I lead product design for SBS. I'm an architect by degree, a creative at heart, and a design leader by choice. Over the decades of my career, I've worked with different kinds of businesses as well as large global companies, and I've traveled extensively. Professionally, I look for the most frequent and most frustrating pain points and design solutions for them. Gender bias is one such example. And in a patriarchal society, in a male-dominated professional environment, it becomes all the more important to address it and find solutions so that we do not carry it forward as it has been in the past.

  • Speaker #3

    And I'm Dana Lenbury, and I'm currently leading data analytics and AI initiatives at SBS. So in my role, what I get to do is drive essential data initiatives that are helping our product lines and our clients leverage data and AI effectively. And I've been working in the financial services sector for nearly 20 years. And before joining SBS about three years ago, I led work in data science, utilizing AI and machine learning and statistics to generate business value and drive innovation for financial institutions. Really great to be on this podcast. Thank you for having me.

  • Speaker #0

    So let's start with the big picture. AI is becoming more embedded in our daily lives, from hiring to lending, and so are the risks of reinforcing old inequalities. But beyond fairness, there is a surprising link between bias and sustainability. Can you tell us how these two are connected?

  • Speaker #1

    From our sustainability point of view, something that we need to keep in mind, and that we often miss, is that sustainability is not just about the environment, being green, or maintaining resources. It's also about how we create an economy that, as a whole, is going to be able to continue growing. Being inclusive is part of the way we support continuous growth and an economy that is going to last in the long term. If you take the world 50 years ago, and it's a bit difficult to say this in the current moment, but if you take it 50 years ago, we were excluding almost 50% of the population, which of course has an impact on the way our economy can grow as a whole. And I feel like I say economy every two words, but the idea is that we cannot put a significant part of the population to the side. Of course, this is women in the bias that we're talking about today, but this is also about other underserved communities. It can be about minorities in general. It is also about people with disabilities and things like that. So every time we exclude a part of the community, we are reducing our capacity to grow by limiting the resources we can actually harness.

  • Speaker #3

    When we're thinking about gender bias in AI, we're talking about the ways in which AI systems produce unequal outcomes based on gender. Often this is unintentional, but it has real-world consequences. Now, when we start to think about the two topics together, gender bias in AI and how it relates to environmental sustainability, this is actually a very powerful yet often overlooked question. Gender bias in AI and environmental sustainability might seem like completely separate issues, but they're actually deeply interconnected, even at the root of where they come from. And we can see this through the lens of equity, impact, and systemic design. So let me talk a little bit about the roots of these two issues. The reality is that they share a history when it comes to systemic inequality. Both gender bias in AI and environmental degradation are symptoms of systems designed without inclusive representation. AI systems often reflect the values of the dominant group, often ignoring women and marginalized communities. In fact, the environmental costs of AI, so we can think about water depletion and energy use, often fall on low-income or indigenous communities located near data centers or mining operations. These are communities that are frequently underrepresented in AI design and governance. And then when it comes to climate stress, so we can think here about food insecurity and migration, this often hits women and other marginalized communities hardest. And AI systems that aren't accounting for this can actually amplify the vulnerability of these populations through biased resource allocation or exclusion from support systems. So some of these connections are just so deeply rooted. I think it's really important that we think of them in that connection when we talk about this. And there's so much we can learn from thinking about that connection as well.

  • Speaker #2

    Thank you, Dana. Thank you, Bettina, for the comprehensive understanding of gender bias in AI. I'd like to highlight first that AI is very, very powerful. It's a powerful tool, just like television, like books, like movies, but it is only a tool. And we still need to use our wisdom, our critical thinking, to evaluate information and stories that are coming from all sources, including AI. Now, if we talk about financial technology: in finance, over 70% of women have bank accounts, but only one in three actively use their accounts. So a majority of women access banking services via men in their family. Financial literacy is low, and so is technical literacy; it's not promoted among women in any geography. As an example, women are 8% less likely than men to own a mobile phone, especially in low- and middle-income countries. Even though computing is evolving with machine learning and generative AI, the data that we have been gathering so far is still biased, and it's high time that we take corrective measures. Because the way we shape technology today will shape the behavior, the routines, the values of the future. And we want to make it good. We want to make it sustainable. We want to make it scalable.

  • Speaker #0

    Before we can tackle the issue, we need to understand it. So what do we mean by gender bias in AI and how does it manifest, particularly in sectors like banking and fintech, where data and algorithms increasingly shape decisions?

  • Speaker #3

    Yeah, so when we're talking about gender bias in AI, we're thinking about the ways in which AI systems are producing and reproducing unequal outcomes based on gender, often with unintentional consequences in the real world. What's happening in financial institutions and across the industry matters very deeply, because we are using AI to make decisions about so many things: credit scoring, loan approvals, insurance pricing, fraud detection, all decisions that directly impact people's everyday lives and, of course, their financial futures. The problem related to this bias really starts with the data. If AI is trained on historical financial data, it reflects past discrimination. Say, for example, women historically have been granted smaller loans or have had limited access to credit, and in some countries women still face legal barriers to credit. So there has been a lot of discrimination throughout the ages, in all countries, really. Then, because of this data and the slant that we see in it, AI learns these patterns and replicates them, even if gender isn't explicitly included as a variable in the data set. For example, there was a well-known case a few years ago where women were being offered dramatically lower credit limits than men, even when they had a similar financial profile. And that's really a great example, I would say, of bias showing up in a seemingly neutral algorithm. It also shows up in more subtle ways. Behavioral data, things like shopping patterns and even browser history, can all carry gendered assumptions that end up influencing credit decisions. So investment platforms might unknowingly, for example, cater more to a male risk profile and then leave women underserved. And if the data used to train these systems underrepresents women or non-binary individuals, the models just aren't going to perform as well for them. That's not just a technical issue; it's really an inclusion issue.
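
One concrete way to surface the kind of unequal outcomes Dana describes is to audit a model's decisions after the fact. The sketch below is a minimal example, assuming a hypothetical decision log with "gender" and "approved" columns (not a real SBS schema); it computes the disparate impact ratio, a common first-pass fairness check:

```python
# Minimal fairness-audit sketch: compare approval rates across gender
# groups in a decision log. Column names are hypothetical placeholders.
import pandas as pd

def disparate_impact_ratio(df, group_col, outcome_col, protected, reference):
    """Ratio of positive-outcome rates, protected group vs. reference group.

    A value well below 1.0 (a common rule of thumb is < 0.8) suggests the
    system is producing unequal outcomes for the protected group.
    """
    rate_protected = df.loc[df[group_col] == protected, outcome_col].mean()
    rate_reference = df.loc[df[group_col] == reference, outcome_col].mean()
    return rate_protected / rate_reference

decisions = pd.DataFrame({
    "gender":   ["F", "F", "F", "F", "M", "M", "M", "M"],
    "approved": [1,   0,   0,   1,   1,   1,   0,   1],
})
print(disparate_impact_ratio(decisions, "gender", "approved", "F", "M"))
# 0.50 / 0.75 = 0.67, below the 0.8 rule of thumb, so worth investigating
```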

  • Speaker #1

    I think what is interesting and important to keep in mind is that, as Dana was saying, there are two aspects in which this impacts women and minorities in general. One is in the decisions that are made: as we believe that going through AI is going to take out the human bias, we're actually leaving more space for bias based on historical data and on decisions that were made previously, with a different view of things. But there are also those blind angles and the impact they have on the side, because, as Shaleen was saying, women are less likely to have mobile phones and less likely to be active in their financial lives. When you take both of these aspects together, you're increasing the risk of putting people to the side, of not being inclusive enough. And once again, the risk is not having the capacity to solve today's problems because we are looking at the world in a historical way instead of changing or triggering that movement. And if we add another thing: we are obviously talking about the use of financial tools, but AI is not only used within financial tools. It's also used for recruitment, and it's also going to be used to train people and build financial literacy. So it really impacts the whole value chain if we don't think about how we can change these kinds of things.

  • Speaker #2

    Just two cents from my side. At SBS, we have the digital sustainability pillars of accessibility and eco-design. And accessibility is not just about disability; it's also about how people of different genders, age groups, and levels of digital literacy can access the same systems. So, for example, voice recognition systems have difficulty recognizing high-pitched voices or ethnic accents. Or there are higher error rates for women and people of color in facial recognition, because the data itself, the data collection, the data curation, was imbalanced. And that's where we need to start solving, not just from a technical perspective, but from a human perspective.

  • Speaker #0

    Could the way we build and run our digital tools actually make them fairer? In other words, can digital sustainability help us fight bias in AI systems?

  • Speaker #1

    I will start by re-explaining a little bit what digital sustainability is, and then I'll let my peers, who are better on the technical side, explain how we are actually using this to improve the way we work. Digital sustainability, as Shaleen said, actually has four pillars, and SBS works specifically with two of them. The first is green IT: how you make sure that your systems are mindful of the impact they have on the environment. You're going to be looking for improved performance, but also to make sure that the conditions you're offering are in line with the needs. To give you an example, a client that might not be as educated in these things will tell you, I need 100% availability for my lending rates. And you're like, well, in reality, you're only using your lending rates once a month to recalculate the interest rate for the loans ongoing. So maybe you don't need this information available 100% of the time, 24/7. And probably some technical people will improve the way I explain this, but the idea is that you also need to dimension the services in line with the actual need and not just with an idea of what it should be. Because as of today, and it is the same in our personal lives, we want Amazon to deliver things to our door in an hour, but do we really need everything delivered in an hour, every time? Not really. We have just grown used to this way of working. So green IT is really about making sure that we design in a way that is in line with the needs, but that is mindful of reducing the impact we have on the environment. Because as you know, it's not just about CO2. It is about water stress, as they were mentioning, and the impact it will have on communities. Because if we don't have enough water, we don't have enough food, and all of those things. So that is one part. Then you have IT for green, which also includes preventive maintenance: making sure that you update and upgrade at the right time, and not just constantly ahead of time or only after breakage, because that also has an impact on the way we work. You have accessibility, which, as Shaleen was saying, is not only about disability. It's also about making sure that our software is available in low-bandwidth communities, because if you only offer mobile banking, people need to be able to access their banking even if they only have 3G or less. And it's also about the elderly, who maybe don't have the same access to or capacity to use a phone: how do they still get access to their banking services? And then you have the ethics pillar, which is about being mindful and thinking about how technology impacts people's lives, and not just assuming that because it's technology it's going to be neutral and have no impact. And that is where we come to things like an AI code of ethics, how much freedom we give to the AI, and how we think about what we feed into the models to make sure that they respond to what we're expecting. And that's where I'm going to leave the floor to my peers, who will be able to explain this even better than me.

  • Speaker #3

    I can talk about this from a technology management perspective; I did my PhD on this topic. And something that kept coming up is the role that technology plays, right? Technology actually has certain affordances, and we can think of them as the opportunities and constraints that we can design into the technologies themselves to get the outcomes we want. So, for example, when we think about designing AI more inclusively, we can factor in diverse voices, especially women, and know that the technology will become more socially responsible and environmentally aware. When we are looking to reduce the gender bias and environmental degradation being perpetuated by AI systems today, we can be thinking about three techniques, so to speak. First of all, we can think about how we compensate. How do we change the technology to compensate for the harm that we're seeing? So we're using AI technologies, for example, to compensate for the negative outcomes of environmental degradation and gender bias by adding something positive: using technology to deliver something positive that's going to offset the negative outcome in that situation. For example, we may need to create data that has more gender balance than what was typically found in the data sets from the real world. If we're missing data from women, we need to start creating synthetic data, or gathering more data from women, to actually compensate for that situation. The second principle I can mention here is transformation. How can we fundamentally change the technology? This is the use of AI technologies to transform negative outcomes into something more positive, using the technology at a more fundamental level to change what was negative, so this bias we're talking about and the degradation of the environment. We can design AI systems to fundamentally change the interaction between the AI systems and the users. Right now, users are really struggling with prompt engineering. It's hard to craft prompts that limit bias when it comes to gender. It's very difficult. And the responses coming back also aren't providing the transparency needed for users to understand the data sources. They're not understanding the interpretation of their prompts, and there's potential for bias to creep in at any stage, and users aren't seeing that: what is the bias, where is it coming from? So we can rethink the interaction entirely by giving users the tools they need for prompt engineering, and by enhancing transparency, so they can make decisions that will combat these issues. And we can transform these systems to add both enablers and constraints that would be useful for reducing gender bias and improving our ecological footprint. Now, the last principle I can mention here is counteraction. This is about the use of AI technologies to counteract negative outcomes, using AI, for example, to prevent the occurrence of what is currently negative, so the bias again, and the degradation of the environment. Imagine advanced AI systems that sit alongside developers, flag bias, recommend inclusive data sets, and auto-adjust models to reduce harm, kind of like having an ethical co-pilot right next to you that evolves with every line of code and every prompt that you give. So those are some examples based on some of those technology management principles.
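
To make the first of these techniques, compensation, concrete, here is a minimal sketch of rebalancing a skewed training set by oversampling the underrepresented group before a credit model is fitted. The data and column names are hypothetical, and in practice a team might generate genuinely synthetic records (for example with SMOTE-style methods) rather than simply resample:

```python
# Sketch of the "compensate" principle: oversample the underrepresented
# gender so a model isn't trained on a historically skewed data set.
# All data and column names here are hypothetical.
import pandas as pd

def rebalance_by_group(df, group_col, seed=0):
    """Oversample every group up to the size of the largest one."""
    target = df[group_col].value_counts().max()
    parts = [
        grp.sample(n=target, replace=True, random_state=seed)
        for _, grp in df.groupby(group_col)
    ]
    return pd.concat(parts).reset_index(drop=True)

train = pd.DataFrame({
    "gender": ["M"] * 80 + ["F"] * 20,   # historical 80/20 skew
    "income": range(100),
})
balanced = rebalance_by_group(train, "gender")
print(balanced["gender"].value_counts())   # both groups now have 80 rows
```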

  • Speaker #2

    And from an SBS design perspective, for inclusive design in banking software, we can bring more universal design principles into the process itself. For example, we can eliminate biased assumptions in user flows. We can design with multiple personas that include women entrepreneurs, gig workers, single parents, and rural users. We can remove gendered assumptions from product features; for example, we don't need to assume that only men are the primary breadwinners or decision makers. We can build accessible interfaces and interactions for all abilities and literacies, financial literacy and technical literacy alike. For example, we can use plain, gender-neutral language. We can use visual aids that make things more inclusive. We can provide screen readers, text resizing, and voice-assisted navigation. We can offer local language support, especially in regions where women or older users may be less fluent in the default language. And then we can make gender-aware credit or risk models, so users are not penalized for gender, marital status, or a caregiving role. We can incorporate alternative data sources, like mobile payments or community group behavior, to assess creditworthiness. So AI opens up the possibility to make things more comprehensive and more inclusive, because our processing power is immense and we can use it to make things more gender-neutral.
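
One way to act on the gender-aware credit-model idea is to drop protected attributes before training and then check whether any remaining features correlate strongly with them, since such proxies can smuggle bias back in. A minimal sketch, with hypothetical column names and an illustrative threshold:

```python
# Sketch: after dropping protected attributes from the training features,
# flag remaining columns that correlate strongly with them (potential
# proxies). Column names and the 0.4 threshold are illustrative only.
import pandas as pd

PROTECTED = ["gender", "marital_status"]

def flag_proxies(df, protected, threshold=0.4):
    """Return, per protected attribute, the features whose absolute
    correlation with it exceeds the threshold."""
    encoded = df.copy()
    # Integer-encode categorical columns so correlations can be computed.
    for col in encoded.select_dtypes(exclude="number").columns:
        encoded[col] = encoded[col].astype("category").cat.codes
    corr = encoded.corr().abs()
    flagged = {}
    for p in protected:
        candidates = corr[p].drop(labels=protected)
        flagged[p] = candidates[candidates > threshold].index.tolist()
    return flagged

applicants = pd.DataFrame({
    "gender":         ["F", "M", "F", "M", "F", "M"],
    "marital_status": ["S", "M", "M", "S", "S", "M"],
    "income":         [40, 70, 45, 65, 42, 72],
    "tenure_years":   [3, 4, 5, 2, 1, 6],
})
# "income" shows up as a potential proxy for gender in this toy data:
# train on the non-protected columns, but scrutinize flagged proxies first.
print(flag_proxies(applicants, PROTECTED))
```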

  • Speaker #1

    One of the things that we can also do as SBS, picking up on what Shaleen was saying before, and that is something we're currently working on, is to support our customers in better understanding these things. We started this journey a couple of years ago, and although different people move at different rhythms, I think we have a unique opportunity today to also have these discussions with our customers and help them be more mindful of how they can use our technology in that sense, as Shaleen was saying: incorporating different approaches within origination to make sure they are more inclusive, or reducing the assumption that the man is the sole breadwinner, and things like that. Because of the role we play in our ecosystem, we can also be advisors and support this growth on our customers' side. And it is something that we have started with some of our customers, where we're giving digital sustainability lessons or courses so they can interpret and integrate these things within their business processes and thinking.

  • Speaker #0

    Okay, so here is a bigger question. Can we create AI that's not just smart and efficient, but also good for people and the planet?

  • Speaker #1

    I think it's not so much a question of energy efficiency versus responsibility. I will always go back to the same thing: we need to be aware of the problem, and then we need to work toward a solution. As long as we're using resources, as long as we're developing software, as long as we're consuming, we're still going to be using and consuming resources. But what matters is to think about the need. For example, we do have within SBS a person who is responsible for studying the impact of AI on the environment, and with whom we are creating a framework for how to make decisions in the way we implement today, to make sure that we also stop the models before going too far. We know that the further an AI model goes in trying to find a better answer, the higher the impact and the cost in energy and resources, while the improvement becomes minimal as we move along. So there are real questions about how to choose which model for which use case, and how far we want to go in the search. All of those things are about understanding the problem, understanding our objective, and trying to apply the right level of solution. So it is about being in the right amount.

  • Speaker #2

    Equilibrium.

  • Speaker #1

    Thank you, Shaleen.
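
To make Bettina's point about diminishing returns concrete, here is a minimal sketch of a training loop that stops once the marginal gain per round falls below a threshold, so compute and energy stop being spent on negligible improvement. This is an illustrative pattern, not SBS's actual framework, and the callables and the 0.1% threshold are hypothetical stand-ins:

```python
# Illustrative "stop before going too far" loop: halt training when the
# relative improvement per epoch no longer justifies the energy cost.

def train_until_diminishing_returns(model, train_one_epoch, evaluate,
                                    max_epochs=100, min_relative_gain=0.001):
    best_score = evaluate(model)  # higher is better, e.g. validation accuracy
    for epoch in range(1, max_epochs + 1):
        train_one_epoch(model)
        score = evaluate(model)
        gain = (score - best_score) / max(abs(best_score), 1e-9)
        if gain < min_relative_gain:
            print(f"Stopping at epoch {epoch}: relative gain {gain:.4%} "
                  f"is below the threshold")
            break
        best_score = score
    return model
```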

  • Speaker #2

    I would flip that question a little bit. We should not ask, is it possible? We should ask, how do we make it possible? Because it's increasingly essential to build AI systems that are both energy-efficient and socially responsible. So we have to make deliberate design choices. We have to set up regulatory frameworks. And we have to shift our mindset from bigger and faster to fairer and smarter. Whether we're talking about optimizing the model architecture, as Bettina just talked about, or working on green AI training practices, or even edge computing and on-device AI, so that we save resources and avoid the back and forth, we need to figure out solutions to make it energy-efficient, for sure. Because we know it's going to go big, and it's going to get bigger and bigger as we move into the future.
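
One widely used lever for the on-device AI Shaleen mentions is quantization: shrinking a model's weights to 8-bit integers so it needs less memory and typically less energy per inference. A minimal sketch, with a toy PyTorch network standing in for a real production model:

```python
# Sketch: dynamic quantization as one practical "green AI" technique for
# on-device inference. The tiny network is a hypothetical stand-in.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 2))

# Store Linear-layer weights as 8-bit integers instead of 32-bit floats.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

x = torch.randn(1, 128)
print(quantized(x).shape)  # same interface, roughly 4x smaller weights
```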

  • Speaker #3

    Yeah, I would completely agree, Shaleen. And I think, too, as we think about how we reduce and remove bias in AI and the underlying data, we know that we need to use frameworks, we need to be auditing, we need to keep retraining our models, right? All of these things are going to have an impact on the environment. And the thing is, once we build up these best practices and standards and can apply them in an efficient way, that's really the best way to go about it. It's not to say, no, we've got to minimize how we're leveraging AI models and cut out the things that are going to make them better, especially when it comes to reducing bias. We have to be thinking smarter and working smarter so that we have less of an impact on the environment.

  • Speaker #0

    From a business point of view, why should companies actually care? Does tackling AI bias and going green really make business sense or is it just about doing the right thing?

  • Speaker #1

    As of today, there's a bigger interest from our clients in these kinds of things, and there are numbers showing that this question is asked more and more in RFPs and RFIs. We also know that, for example, in Europe the CSRD, the sustainability reporting regulation, is asking us to be accountable for all of the value chain, which means our customers and our suppliers. And from a business perspective, being able to deliver on these things, being able to answer these questions, and making sure that we have the right level of implementation on these topics allows us, at a minimum, to answer the requirements, but also to have a differentiator in the type of products that we sell. We know that accessibility is a differentiator today in a lot of places. And for our customers, being able to propose these kinds of services and to showcase that they're working toward this is also a differentiator toward their own customers. So it creates a positive value-creation cycle that I think is important to keep in mind. To put it in a very cynical, money-driven way: doing this actually makes us more efficient and more performant, so we have a bigger profit because our products cost less to run, but we're also getting more clients and more business. So from a business perspective, it is very easy to justify. And I'm sure that Shaleen will add some interesting points.

  • Speaker #2

    I fully agree. Energy-efficient AI models lower operational costs and increase efficiency. It's straight away good business: a leaner infrastructure, optimized storage, lower maintenance expenses. So yes, everything improves the operational costs. Plus, it's a competitive advantage and it drives innovation, because the moment you build a team that's focused on ethical AI and sustainability, in the long run you unlock creativity, you unlock future-proofing of solutions, and you get ahead, essentially, with diverse and inclusive teams. There was a study by BCG showing a 19% innovation boost in companies with above-average levels of diversity. So yes, it makes business sense. It drives innovation. It gives you a competitive advantage.

  • Speaker #0

    So what now? What do we do with all this? If you're a developer, a designer, or anyone making decisions that shape tech, how can you help build AI that is not just smarter, but also fairer and more sustainable?

  • Speaker #2

    So yes, it's a shared responsibility. And as people who are designing and developing AI solutions, we need to ensure that our systems are fair, transparent, and unbiased. One, we need to use diverse teams and representative data sets. We need to conduct regular audits. We need to implement explainable AI so that we can cross-question it and understand how decisions are being made. And we need to advocate for diversity in AI teams so that we're reducing blind spots.
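
The explainable-AI point can be made concrete with a standard technique such as permutation importance, which shows how much each input actually drives a model's predictions, a first step toward cross-questioning a decision. A minimal sketch with synthetic data and hypothetical feature names:

```python
# Sketch: permutation importance as one explainability tool. Data is
# synthetic; feature names are hypothetical credit-model inputs.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))             # columns: income, debt, tenure
y = (X[:, 0] - X[:, 1] > 0).astype(int)   # true rule ignores "tenure"

model = RandomForestClassifier(random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

for name, score in zip(["income", "debt", "tenure"], result.importances_mean):
    print(f"{name}: {score:.3f}")   # "tenure" should score near zero
```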

  • Speaker #3

    Yeah, and AI certainly has a role too, exactly as Shaleen is saying. When we add on a layer of AI to do AI governance, for example, asking how the technology itself can be used toward all of the objectives we've been talking about throughout this, I think that would be extremely powerful.

  • Speaker #1

    And I will just come back to what Shaleen was saying in the beginning: we need to keep remembering that this is a tool. And as a tool, it is something that we need to govern. So we need to implement the frameworks. We need to have the right people working on it. And we need to make sure that we do the checks that let us follow up and confirm that things are working the proper way. It is really also our responsibility to make sure that this answers our needs.

  • Speaker #0

    So to wrap things up, we've seen that building better AI isn't just a nice idea; it's something we have to do. And it's doable, so that's good news, but it's just a start. So, according to you, what will it take to keep pushing for technology that serves everyone equitably and responsibly?

  • Speaker #2

    Whether we like it or not, AI is going to shape the next generation's worldview. If it is biased, so is our future. So we have to tackle it, and we have to tackle it now. We need to build a world where people see limitless possibilities regardless of their gender, caste, creed, color, anything: a fairer world. Because this is not just a technical flaw; it's a societal issue. And the question is not whether AI can be fair. It's whether we have the will to make it so.

  • Speaker #3

    And I think this issue around governance, making sure that IT systems and AI systems are governed in the right way to get the right outcomes, as Shaleen is saying, so that we do have a fair world and equitable outcomes, is so critical and important. We want to see governance standards continue to rise and get implemented as best practice in every industry. That's something we're already seeing a lot of change on, which is great, but it's something we have to keep pushing for and keep advocating for. And there's a role for everyone to play in this, because AI touches everyone's lives, right? All of us are responsible for getting our feedback back to companies, making sure we get the transparency we need, making sure that as systems progress, we're being critical: Are we being treated fairly? Are we pushing to create the world that we want to see?

  • Speaker #1

    To highlight Shaleen's and Dana's points, I think we are all responsible for this. One of the aspects we haven't mentioned, because we've been talking mostly about the company, is how everybody needs to be educated in this, and how we need to make sure that every person, whether it's ourselves, our family, our kids, our community, is aware that this is something we are all responsible for, and that we keep advocating for it and expecting it from our companies, our governments, our schools, everywhere we are. We need to be, once again, and this is a word I use a lot, mindful of the role we play in this game. Because if we are not asking the questions, if we are not being there, if we are not trying to change things and voice the need to make this an important topic and keep it on the table, it just gets thrown out with other things. And in a very cynical world where we are driven by financial market indicators, we also need, internally, to be able to keep proving that this brings value and makes sense. That means knowing what we're tracking. It means having KPIs that we can showcase, and being able to track these evolutions to make sure this is taken seriously at every level, even though simply being there and voicing it is already an important part of the journey.

  • Speaker #0

    Thank you so much, Bettina, Shaleen, and Dana, for sharing your insights. It's been a very thoughtful conversation.

  • Speaker #1

    Thank you, Caroline.

  • Speaker #3

    Thank you.

  • Speaker #2

    Yeah, thank you so much.

  • Speaker #3

    Always so fun to be with you guys talking about this amazing topic.

  • Speaker #0

    Always.

Description

In this powerful episode, we dive into how gender bias in AI isn't just a tech problem—it's a sustainability issue, too. Join Bettina (Head of Sustainability at SBS), Dana (AI & Data Analytics Lead at SBS), and Shalien (Head of Product Design at SBS)—as they break down the hidden connections between inclusive technology and environmental impact : Why does AI often disadvantage women and marginalized communities? How can fairer algorithms also be greener? What are banks doing to design apps that really work for everyone?


Hosted by Ausha. See ausha.co/privacy-policy for more information.

Transcription

  • Speaker #0

    Welcome to FinTrans, the podcast series where we explore the hot trends and news in the financial sector with experts. Today, we are unpacking a complex but crucial topic, gender bias in AI, and its unexpected connection to sustainability. And to explore this, I am joined by experts at SBS, Bettina Vaccaro-Carbon, Head of Sustainability, Chalene Kishore, Head of Product Design, and Danelle Lundbury, Head of Data. analytics and AI. Together, we will dive into how bias shows up in financial technology, how inclusive design can make a difference, and why building fairer, greener AI is not just the right thing to do, but also the smart thing. Before we get started, please everyone introduce yourself.

  • Speaker #1

    Hi, my name is Bettina. Thank you, Caroline, for having us today. We've already started this discussion a couple of months ago about our work in digital sustainability and sustainability as a whole. So my role at SVS is Head of Sustainability, which means that I cover all of the CSR topics within our company, and in particular, a topic that is quite dear to our heart, which is digital sustainability.

  • Speaker #2

    As for me, I'm Shaleen Kishore. I lead product design for SPS. I'm an architect by degree. I'm a creative at heart and a design leader by choice. And during the last decades of my career, I've worked with different kinds of businesses, as well as large global companies. I've traveled extensively. And professionally, I'm looking for the most frequent and most frustrating pain points to design solutions for them. Gender bias, is one such example. And in a patriarchal society, in a male-dominated professional environment, it becomes all the more important to address it and find solutions so that we do not take it forward as it has been in the past.

  • Speaker #3

    And I'm Dana Lenbury, and I'm currently leading data analytics and AI initiatives at SBS. So in my role, what I get to do is drive essential data initiatives that are helping our product lines and our clients leverage data and AI effectively. And I've been working in the financial services sector for nearly 20 years. And before joining SBS about three years ago, I led work in data science, utilizing AI and machine learning and statistics to generate business value and drive innovation for financial institutions. Really great to be on this podcast. Thank you for having me.

  • Speaker #0

    So let's start with the big picture. AI is becoming more embedded in our daily lives, from hiring to lending, and so do the risks of reinforcing old inequalities. But beyond fairness, there is a surprising link between bias and sustainability. Can you tell us how are these two connected?

  • Speaker #1

    From our sustainability... point of view, I think that is something that's something that we need to keep in mind and that we often miss is that sustainability is not just about the environment or being green or maintaining resources. It's also about how we create an economy that as a whole is going to be able to continue growing. And that is, it's not that it is inclusive, but being inclusive is part of the ways in which we support continuous growth and an economy that is going to be lasting in the long term. If you take the world. 50 years ago, and it's a bit difficult to say it in the current moment, but if you take it 50 years ago, we were excluding almost 50% of the population, which of course is going to have an impact in the way our economy can grow as a whole. And I feel like I say economy every two words, but the idea is that we cannot put to the side a significant part of a population. And of course, this is women in the bias that we're talking about today, but this is also about Um. other underserved communities. It can be about minorities in general. It is also about people with handicaps and things like that. So every time we exclude a part of the community, we are reducing our capacity to grow by limiting the resources we can actually harness.

  • Speaker #3

    When we're thinking about gender bias in AI, we can be talking about the ways in which the AI systems are producing unequal outcomes that are based on gender. So often this is unintentionally, but it has real world consequences. Now, when we start to think about the two topics together, so gender bias in AI and how does that actually relate to environmental sustainability, This is actually a very powerful yet. Overlook question a lot of times. So gender bias and AI environmental sustainability, they might seem like completely separate issues, but they're actually deeply interconnected, even at the root of where they come from. And we can see this through the lens of equity, impact and systemic design. So let me talk a little bit about the roots of these two issues. So the reality is that they share a certain. history when it comes to systemic inequality. So both gender bias in AI and environmental degradation are symptoms of systems designed without inclusive representation. So AI systems, they're often reflecting the values of the dominant group, often ignoring women and marginalized communities. In fact, the environmental costs of AI, so we can think about water depletion, energy use, all of these things are often falling on low-income or indigenous communities located near data centers or mining operations. These are communities that are frequently underrepresented in AI design and governance. And then when it comes to climate stress, so we can think about here food security and migration, the insecurity around food rather, this is often hitting women and other marginalized communities hardest. And AI systems that aren't accounting for this, they can actually amplify the vulnerability of these populations through biased resource allocation or the exclusion from support systems. So some of these connections are just so deeply rooted. I think it's really important that we think of them in that connection when we talk about it. And there's so much we can learn from thinking about that connection as well.

  • Speaker #2

    Thank you, Dana. Thank you, Bettina, for the comprehensive understanding of gender bias in AI. I'd like to highlight first that AI is very, very powerful. It's a powerful tool, just like television, like books, like movies, but it is only a tool. And we still need to use our wisdom, our critical thinking to utilize information and stories that are coming from all the sources, including AI. Now, if we talk about financial technology, The finance, over 70% women have bank accounts, only one in three actively use their accounts. So a majority of women access banking services via men in their family. Financial literacy is low and so is technical literacy. It's not promoted in women in any geography. And as an example, women are 8% less likely than men to own a mobile phone, especially in low and middle income countries. Even though computing is evolving with machine learning and generative AI, the problems that are coming with that data that we have been gathering so far, it's still biased and it's high time that we take corrective measures. Because the way we're going to shape technology today, it'll shape the behavior, the routines, the values in future. And we want to make it good. We want to make it sustainable. We want to make it scalable.

  • Speaker #0

    Before we can tackle the issue, we need to understand it. So what do we mean by gender bias in AI and how does it manifest, particularly in sectors like banking and fintech, where data and algorithms increasingly shape decisions?

  • Speaker #3

    Yeah, so when we're talking about gender bias in AI, we're thinking about the ways in which the AI systems are. producing and reproducing unequal outcomes based on gender. And this is often with unintentional consequences that's happening in the real world. So what's happening in financial institutions and across the industry, it's something that we see matters very deeply because we are using AI to make decisions about so many things. So we can be thinking about the credit scoring. The loan approvals, insurance pricing, fraud detection, all of these things and decisions that are directly impacting people's everyday lives and, of course, their financial futures. The problem related to this bias that we're seeing. is really starting with the data. So if AI is trained on historical financial data, it's reflecting past discrimination. So say, for example, women historically have been granted smaller loans or have had limited access to credit. And in some countries, it's still illegal. So lots of discrimination throughout the ages in all countries, really. Then because of this data and the slant that we see in the data, We know that AI is learning these patterns and they're replicating them, even if gender isn't explicitly included as a variable in the data set. So, for example, there was a well-known case a few years ago where women were being offered dramatically lower credit limits than men, even when they had a similar financial profile. And that's really a great example, I would say, of bias showing up in a seemingly neutral algorithm. It's also showing up in more subtle ways as well. Behavioral data is there, things like shopping behaviors and patterns and even browser history. These can all carry gendered assumptions that end up influencing credit decisions. So investment platforms might unknowingly, for example, cater more to a male risk profile and then leave women underserved. And if the data is used to train these systems, If that data is underrepresenting women or non-binary individuals, the models just aren't going to perform as well for them. That's not just a technical issue, but it's really an inclusion issue.

  • Speaker #1

    I think what is interesting and important to keep in mind is that, as Dana was saying, there are two aspects in which this is going to be impacting women and minorities in general. One is that in the decisions that are made. as we believe that going through AI is going to take out the human bias. We're actually leaving more space to bias based on historical data and decisions that were made previously with a different view of things, but also because there is those blind angles and the impact that it has on the side, because as Shalini was saying, women are less likely to have mobile phones, are less likely to be active in their financial life, which means that when you take both of this aspect you're just increasing the risk of putting people to the side or not being inclusive enough and once again the risk is not having the capacity to solve today's problems by looking in a world in in an historical way instead of like changing or triggering that movement if we add another thing because we are obviously talking about the use of financial tools but the ai is is not only used within the financial tour, it's also used. for recruitment and it's also going to be used for trying to train and do financial literacy. So it really impacts the whole chain of value if we don't try to think about how we can change these kind of things.

  • Speaker #2

    Just two cents from my side. So at SPS, we have the digital sustainability pillars of accessibility and eco-design. And accessibility is not just about disability, it's also about how can different genders people of different age groups and digital literacy access the same systems. So, for example, high-pitched voices or ethnic accents, it's difficult for the voice recognition systems to recognize them. Or sometimes even there are higher error rates for women and people of color on facial recognition because the data itself, the data collection, the data curation, it was imbalanced. And that's where we need to start solving, not just from a technical perspective, but from a human perspective.

  • Speaker #0

    Could the way we build and run our digital tools actually make them fairer? In other words, can digital sustainability help us fight bias in AI systems?

  • Speaker #1

    I will start by re-explaining a little bit what is digital sustainability and then I'll... I led by peers who are better in the technical part, explain how we are actually using this to improve the way we work. But digital sustainability, as Shilin said, actually has four pillars, and NSVS works specifically with two of them. So the four pillars are going to be green IT, so how you make sure that your system are mindful of the impact they have in the environment, and you're going to be looking for improved performance, but also to make sure that The conditions. that you're offering are in line with the needs. So to give you an example, a client that might not be as educated in this kind of thing will tell you like, I need 100% availability for my lending rates. And you're like, well, in reality, you're only using your lending rates once a month to recalculate your interest rate for the loans ongoing. So maybe you don't need this. information available 100% of the time, 24-7. And probably some technical people will improve the way I explain this, but the idea is that you also need to dimension the services in line with the need of the services and not just with an idea of what it should be. Because as of today, and it is the same as personal views, like we want Amazon to deliver things in our door in an hour, but do we really need something in an hour to deliver every time? Not really. It's just like we have grown used to this way of working. So green IT is really about making sure that we design in a way that is in line with the needs, but that is mindful of reducing the impact that we have on the environment. Because as you know, it's not just about CO2. It is about water stress, as they were mentioning, and the impact that it will have in the communities. Because if we don't have enough water, we don't have enough food and all of those things. So that is one part. Then you have IT for green, but it's also preventive maintenance. and making sure that you update and upgrade in the right time and not just all the time ahead or after breakage, because that is also having an impact in the way we work. You have accessibility, which as Janine was saying, is not only about disability, but it's also about making sure that our software are available in low debit communities, because if you only have mobile bankings, you need to also be able to access your banking, even if you only have 3G or less. And it's also about if you're elderly and maybe you don't have the same access or capacity to use a phone, like how do you still get access to your banking services? And then you have your ethics pillar, which is about, and this is where it comes to what we're talking is about being mindful and thinking about how technology has an impact in the life of people. And not just thinking that because it's technology is going to be neutral and not going to have an impact. And that is where we're going to be able to. coming to things like an AI code of ethics and how much freedom we give to the AI or how we can think about what we are going to fit into the models in order to make sure that they respond to what we're expecting. And that's where I'm going to leave the floor to my peers who will be able to explain this even better than me.

  • Speaker #3

    I can talk about this from a technology management perspective when I did my PhD on this topic. And something that kept coming up is about the role that technology plays, right? So technology actually has certain affordances and we can think of them in this idea of the opportunities and constraints that we can design into the technologies themselves to get the outcomes we want. So for example, when we think about designing AI more inclusively, we can factor in different diverse voices, especially women, and know that. the technology will become more socially responsible and environmentally aware. And some of the things of those principles we can be thinking about when we are looking to reduce that gender bias and environmental degradation that's being perpetuated by AI systems today, we can be thinking about three ways or three techniques, so to speak. So first of all, we can think about how do we compensate? How do we change the technology to compensate for the harm that we're seeing? So we're using AI technologies, for example, to compensate for those negative outcomes of environmental degradation and gender bias by adding something positive. So it's about using technology to deliver something positive that's going to offset what's being negatively outcombed in that situation. So, for example. We may need to create data that has more gender balance than what was typically found in the data sets from the real world, right? So if we're missing data from women, we need to start creating synthetic data, maybe gathering more data from women to actually compensate for that situation. The second principle I can mention here is transformation. So how can we fundamentally change the technology, right? So the use of AI technologies to transfer. form negative outcomes and change that into something more positive. So using the technology at a more fundamental level to change what was that that was negative, right? So this bias we're talking about, the degradation of the environment. And we can design AI systems to fundamentally change the interaction between the AI systems and the users. So right now, users are actually really struggling with prompt engineering. It's hard to craft prompts. to limit the bias. when it comes to gender. It's very difficult. And the responses back also aren't providing the transparency needed for the users to understand their data sources. They're not understanding the interpretation of their prompts. And there's potential for bias to be creeping in at any stage. And users aren't seeing that. What is the bias? Where is it coming from? And so we can actually rethink the interaction entirely by giving users the tools that they need for prompt engineering and to enhance transparency so they can make decisions that will combat these issues. And we can transform these systems to add both enablers and constraints that would be useful for reducing gender bias and to help improve our ecological footprint. Now, the last principle I can mention here is counteraction. This is about the use of AI technologies to counteract negative outcomes. So we're using AI technology, for example, to prevent the occurrence of what is currently negative. So the bias again, and the degradation of the environment. So imagine in this situation, advanced AI systems that sit alongside developers and flag bias and recommend inclusive data sets and auto adjust models to reduce harm, like kind of like having an ethical co-pilot right next to you, right? 
that's going to evolve with every line of code and every prompt that you give. So those are some examples based on some of those technology management principles.

  • Speaker #2

    And as SBS design perspective from an inclusive design for banking software, we can bring in more universal design principles into the process itself. So for example, we can eliminate the biased assumptions and user flows. We could design with multiple personas that includes women entrepreneurs or gig workers or single parents and rural users. We could remove the gendered assumptions and product features. So, for example, we don't need to assume that only men are the primary bread earners or decision makers. We can use accessible interfaces and interactions for all abilities and literacies, financial literacy and technical literacy. For example, we can use plain. gender-neutral language. We can use visual aids that make it more inclusive. We provide readers, screen readers, text resizing, voice-assisted navigation. We can offer local language support, so especially in the regions where women or older users may be less fluent in the default language. And then we can make gender-aware credit or risk models, so the users are not penalized for gender or a marital status or a caregiving role. We can incorporate alternative data sources like the mobile payments, the community group behavior to assess the creditworthiness. So AI opens up the possibilities to make things more comprehensive and more inclusive because our processing power is immense and we can utilize that to make things more gender neutral.

  • Speaker #1

    One of the things that we can also do, and picking up on what Celine was saying before, as SBS, and that is something that we're currently working on, is also support our customers in better understanding these kind of things, because we have started this journey a couple of years ago, and although different people are at different rhythms, I think it is a unique opportunity that we have today to also have these discussions with our customers and help them be more mindful, and how they can use our technology in that sense, as Celine was saying. in trying to incorporate different ways within origination to make sure that they are more inclusive or reducing the idea that maybe the man is the sole breadwinner and things like that. So because of the role we play in our ecosystem, we also can be advisors and we can support this growth on our customer side. And it is something that we have started with some of our customers where we're giving digital sustainability. lessons or digital sustainability courses in order to be able to interpret and integrate these kind of things within their business processes and thinking.

  • Speaker #0

    Okay, so here is a bigger question. Can we create AI that's not just smart and efficient, but also good for people and the planet?

  • Speaker #1

    I think it's not so much about energy efficiency and responsibility. I will always go back to the same thing. We need to be aware of the problem and then we need to work to our solution. as long as we're using resources, as long as we're developing software, as long as we're consuming, we're still going to be using and consuming resources. But what matters is to think about the need. And for example, we do have within SBS a person who is responsible for studying the impact of AI in the environment and who we are creating a framework on how to make decisions in the way we implement today to make sure that we also stop the models before going too far because we do know that the further the AI model is trying to look for a better answer the higher the impact and the cost in energy and resources however the improvement becomes minimal as we move along so there are real questions about how to choose which model depending on which case we're using and how far we want to go in the research and all of those things are about understanding the problem, understanding our objective and trying to put the right level of solution. So it is about being in the right amount.

  • Speaker #2

    Equilibrium.

  • Speaker #1

    Thank you, Shaleen.
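
    Bettina's point about stopping a model before the marginal gains vanish can be encoded directly as a budgeted improvement loop. The sketch below is only an illustration: `train_step` and `validate` are hypothetical placeholders for real training and evaluation routines, and the thresholds are policy choices.

```python
# A sketch of "stopping before going too far": keep improving a model
# only while the marginal gain justifies the extra compute and energy.
# `train_step` and `validate` are hypothetical placeholders.

MIN_GAIN = 0.002   # smallest improvement worth another round (a policy choice)
MAX_ROUNDS = 50    # hard cap on compute spent

def improve_within_budget(train_step, validate) -> float:
    best = validate()
    for round_no in range(1, MAX_ROUNDS + 1):
        train_step()
        score = validate()
        if score - best < MIN_GAIN:
            print(f"Stopping at round {round_no}: gain below threshold.")
            break
        best = score
    return best
```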

  • Speaker #2

    I would flip that question a little bit. We should not ask, is it possible? We should ask, how do we make it possible? Because it's increasingly essential to build AI systems that are both energy efficient and socially responsible. So we have to make deliberate design choices, we have to set up regulatory frameworks, and we have to shift our mindset from bigger and faster to fairer and smarter. Whether we're talking about optimizing the model architecture, as Bettina just described, adopting green AI training practices, or even edge computing and on-device AI, so that we save resources and reduce the back and forth, we need to figure out solutions to make it energy efficient, because we know AI is going to get bigger and bigger as we move into the future.
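
    One concrete version of the green-AI levers Shaleen mentions is quantization, which shrinks a trained model so it can run on-device with less memory and energy. The sketch below assumes PyTorch and uses a toy model standing in for a real one; it illustrates the technique, not SBS's stack.

```python
# One concrete green-AI lever: dynamic quantization stores a trained
# model's weights as 8-bit integers so it runs with a smaller footprint,
# e.g. on-device. A sketch assuming PyTorch.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(128, 64),
    nn.ReLU(),
    nn.Linear(64, 2),
)

# Convert the Linear layers to int8 weights; the interface is unchanged.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

x = torch.randn(1, 128)
print(quantized(x))  # same prediction pathway, smaller and cheaper to run
```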

  • Speaker #3

    Yeah, I would completely agree, Shaleen. And as we think about how we reduce and remove bias in AI and the underlying data, we know that we need to use frameworks, we need to be auditing, and we need to keep retraining our models, right? All of these things are going to have an impact on the environment. But once we build up these best practices and standards and can apply them efficiently, that's really the best way to go about it. The answer is not to say we must minimize how we leverage AI models and cut out the very things that make them better, especially when it comes to reducing bias. We have to be thinking smarter and working smarter so that we have less of an impact on the environment.
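
    Dana's habit of auditing before retrained models go live can be operationalized as a promotion gate in the pipeline. The sketch below uses hypothetical evaluation callbacks and thresholds; it is one possible shape for such a gate, not a prescribed standard.

```python
# A sketch of an audit gate: a retrained model is promoted only if it
# clears both an accuracy floor and a bias check. All names and
# thresholds are hypothetical.

ACCURACY_FLOOR = 0.80
BIAS_TOLERANCE = 0.10  # e.g. the approval-rate gap from the earlier sketch

def promote_if_clean(candidate, evaluate_accuracy, evaluate_bias_gap) -> bool:
    accuracy = evaluate_accuracy(candidate)
    bias_gap = evaluate_bias_gap(candidate)
    if accuracy < ACCURACY_FLOOR:
        print(f"Rejected: accuracy {accuracy:.2f} below floor.")
        return False
    if bias_gap > BIAS_TOLERANCE:
        print(f"Rejected: bias gap {bias_gap:.2f} above tolerance.")
        return False
    print("Promoted: passes both the accuracy and fairness gates.")
    return True
```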

  • Speaker #0

    From a business point of view, why should companies actually care? Does tackling AI bias and going green really make business sense or is it just about doing the right thing?

  • Speaker #1

    As of today, there is a growing interest from our clients in these kinds of things, and there are numbers showing that this question is asked more and more in RFPs and RFIs. We also know that, for example, in Europe the CSRD, the main CSR regulation at the moment, is asking us to be accountable for the whole value chain, which means our customers and our suppliers. From a business perspective, being able to deliver on these kinds of things, to answer these questions, and to have the right level of implementation on these topics allows us, at a minimum, to answer the requirements, but also to have a differentiator in the type of products we sell. We know that accessibility is a differentiator today in a lot of places, and for our customers, being able to propose these kinds of services and to showcase that they are working toward this is also a differentiator toward their own customers. So it creates a positive value-creation cycle that I think is important to keep in mind. To put it in a very cynical, money-driven way: doing this actually makes us more efficient and more performant, so we have a bigger profit, because our products cost less to run, and we're also winning more clients and more business. So from a business perspective, it is very easy to justify. And I'm sure that Shaleen will add some interesting points.

  • Speaker #2

    I fully agree. Energy-efficient AI models lower operational costs and increase efficiency. It's straight away good business: a leaner infrastructure, optimized storage, lower maintenance expenses. So yes, everything improves on the operational cost side. Plus, it's a competitive advantage and it drives innovation, because the moment you build a team focused on ethical AI and sustainability, in the long run you unlock creativity, you future-proof your solutions, and you get ahead with diverse and inclusive teams. There was a study by BCG showing a 19% innovation boost in companies with above-average levels of diversity. So yes, it makes business sense, it drives innovation, and it gives you a competitive advantage.

  • Speaker #1

    So what now? What do we do with all this? If you're a developer, a designer, or someone in IT making decisions that shape tech, how can you help build AI that is not just smarter, but also fairer and more sustainable?

  • Speaker #0

    So yes, it's a shared responsibility. As people who design and develop AI solutions, we need to ensure that our systems are fair, transparent, and unbiased. One, we need to use diverse teams and representative data sets. We need to regularly conduct audits. We need to implement explainable AI so that we can cross-question it and understand how decisions are being made. And we need to advocate for diversity in AI teams so that we reduce blind spots.
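
    As one illustration of the explainable-AI practice Shaleen describes, permutation importance shows which features drive a model's decisions, so reviewers can cross-question them. The sketch below uses scikit-learn with synthetic data; the feature names are hypothetical, and other tools (SHAP, for instance) could serve the same purpose.

```python
# A sketch of one explainability technique, permutation importance:
# it measures how much each feature drives a model's decisions.
# Data and feature names are toys, purely for illustration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                  # stand-in features
y = (X[:, 0] + 0.1 * X[:, 1] > 0).astype(int)  # stand-in outcome

model = RandomForestClassifier(random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

for name, score in zip(["income", "tenure", "postcode"], result.importances_mean):
    print(f"{name}: {score:.3f}")
# If a proxy feature such as postcode dominates, that is a cue to dig
# into the training data for hidden gendered or demographic bias.
```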

  • Speaker #2

    Yeah, and AI certainly has a role here too, exactly as Shaleen is saying. When we add a layer of AI to do AI governance, for example, the technology itself can be used to advance all of the objectives we've been talking about throughout this conversation. I think that would be extremely powerful.

  • Speaker #3

    And I will just come back to what Shaleen was saying in the beginning: we need to keep in mind that this is a tool. And as a tool, it is something that we need to govern. So we need to implement the framework, we need to have the right people working on it, and we need to put in place the checks that make us follow up and make sure things are working the proper way. It is really also our responsibility to make sure that this answers our needs.

  • Speaker #1

    So to wrap things up, we've seen that building better AI isn't just a nice idea; it's something we have to do. And it's doable, so that's good news, but it's just a start. So, in your view, what will it take to keep pushing for technology that serves everyone equitably and responsibly?

  • Speaker #0

    So whether we like it or not, AI is going to shape the next generation's worldview. If it is biased, so is our future. So we have to tackle it, and we have to tackle it now, building a world where people see limitless possibilities regardless of their gender, caste, creed, color, anything. We need to make a fairer world, because this is not just a technical flaw; it's a societal issue. And the question is not whether AI can be fair. It's whether we have the will to make it so.

  • Speaker #2

    And I think this issue around governance, making sure that IT systems and AI systems are governed in the right way to get the right outcomes, as Shaleen is saying, so that we do have a fair world and equitable outcomes, all of this is so critical and important. We want to see governance standards continue to rise and get implemented as best practice in every industry. That's something we're already seeing a lot of change on, which is great, but it's something we have to keep pushing for and keep advocating for. And there's a role for everyone to play in this, because AI touches everyone's lives, right? So all of us are responsible for getting our feedback back to companies, making sure we get the transparency we need, and making sure that, as systems progress, we're being critical: are we being treated fairly? Are we pushing to create the world that we want to see?

  • Speaker #3

    To build on what Shaleen and Dana said, I think we are all responsible for this. One of the aspects we haven't mentioned, because we've been talking mostly about the company, is that everybody needs to be educated in this, and we need to make sure that every person, whether it's ourselves, our family, our kids, or our community, is aware that this is something we are all responsible for, and that we keep advocating for it and expecting it from our companies, our governments, our schools, everywhere we are. Once again, and this is a word I use a lot, we need to be mindful of the role we play in this game. Because if we are not asking the questions, if we are not present, if we are not trying to change things and voicing the need to make this an important topic and keep it on the table, it just gets thrown away with other things. And in a very cynical world, where we are driven by financial market indicators, we also need, internally, to be able to keep proving that this brings value and makes sense. That means knowing what we're tracking; it means having KPIs that we can showcase and being able to follow these evolutions, to make sure this is taken seriously at every level. Even so, being there and voicing it is already an important part of the journey.

  • Speaker #1

    Thank you so much, Bettina, Shaleen, and Dana, for sharing your insights. It's been a very thoughtful conversation.

  • Speaker #0

    Thank you, Caroline.

  • Speaker #3

    Thank you.

  • Speaker #2

    Yeah, thank you so much.

  • Speaker #3

    Always so fun to be with you guys talking about this amazing topic.

  • Speaker #0

    Always.

Share

Embed

You may also like

Description

In this powerful episode, we dive into how gender bias in AI isn't just a tech problem—it's a sustainability issue, too. Join Bettina (Head of Sustainability at SBS), Dana (AI & Data Analytics Lead at SBS), and Shalien (Head of Product Design at SBS)—as they break down the hidden connections between inclusive technology and environmental impact : Why does AI often disadvantage women and marginalized communities? How can fairer algorithms also be greener? What are banks doing to design apps that really work for everyone?


Hosted by Ausha. See ausha.co/privacy-policy for more information.

Transcription

  • Speaker #0

    Welcome to FinTrans, the podcast series where we explore the hot trends and news in the financial sector with experts. Today, we are unpacking a complex but crucial topic, gender bias in AI, and its unexpected connection to sustainability. And to explore this, I am joined by experts at SBS, Bettina Vaccaro-Carbon, Head of Sustainability, Chalene Kishore, Head of Product Design, and Danelle Lundbury, Head of Data. analytics and AI. Together, we will dive into how bias shows up in financial technology, how inclusive design can make a difference, and why building fairer, greener AI is not just the right thing to do, but also the smart thing. Before we get started, please everyone introduce yourself.

  • Speaker #1

    Hi, my name is Bettina. Thank you, Caroline, for having us today. We've already started this discussion a couple of months ago about our work in digital sustainability and sustainability as a whole. So my role at SVS is Head of Sustainability, which means that I cover all of the CSR topics within our company, and in particular, a topic that is quite dear to our heart, which is digital sustainability.

  • Speaker #2

    As for me, I'm Shaleen Kishore. I lead product design for SPS. I'm an architect by degree. I'm a creative at heart and a design leader by choice. And during the last decades of my career, I've worked with different kinds of businesses, as well as large global companies. I've traveled extensively. And professionally, I'm looking for the most frequent and most frustrating pain points to design solutions for them. Gender bias, is one such example. And in a patriarchal society, in a male-dominated professional environment, it becomes all the more important to address it and find solutions so that we do not take it forward as it has been in the past.

  • Speaker #3

    And I'm Dana Lenbury, and I'm currently leading data analytics and AI initiatives at SBS. So in my role, what I get to do is drive essential data initiatives that are helping our product lines and our clients leverage data and AI effectively. And I've been working in the financial services sector for nearly 20 years. And before joining SBS about three years ago, I led work in data science, utilizing AI and machine learning and statistics to generate business value and drive innovation for financial institutions. Really great to be on this podcast. Thank you for having me.

  • Speaker #0

    So let's start with the big picture. AI is becoming more embedded in our daily lives, from hiring to lending, and so do the risks of reinforcing old inequalities. But beyond fairness, there is a surprising link between bias and sustainability. Can you tell us how are these two connected?

  • Speaker #1

    From our sustainability... point of view, I think that is something that's something that we need to keep in mind and that we often miss is that sustainability is not just about the environment or being green or maintaining resources. It's also about how we create an economy that as a whole is going to be able to continue growing. And that is, it's not that it is inclusive, but being inclusive is part of the ways in which we support continuous growth and an economy that is going to be lasting in the long term. If you take the world. 50 years ago, and it's a bit difficult to say it in the current moment, but if you take it 50 years ago, we were excluding almost 50% of the population, which of course is going to have an impact in the way our economy can grow as a whole. And I feel like I say economy every two words, but the idea is that we cannot put to the side a significant part of a population. And of course, this is women in the bias that we're talking about today, but this is also about Um. other underserved communities. It can be about minorities in general. It is also about people with handicaps and things like that. So every time we exclude a part of the community, we are reducing our capacity to grow by limiting the resources we can actually harness.

  • Speaker #3

    When we're thinking about gender bias in AI, we can be talking about the ways in which the AI systems are producing unequal outcomes that are based on gender. So often this is unintentionally, but it has real world consequences. Now, when we start to think about the two topics together, so gender bias in AI and how does that actually relate to environmental sustainability, This is actually a very powerful yet. Overlook question a lot of times. So gender bias and AI environmental sustainability, they might seem like completely separate issues, but they're actually deeply interconnected, even at the root of where they come from. And we can see this through the lens of equity, impact and systemic design. So let me talk a little bit about the roots of these two issues. So the reality is that they share a certain. history when it comes to systemic inequality. So both gender bias in AI and environmental degradation are symptoms of systems designed without inclusive representation. So AI systems, they're often reflecting the values of the dominant group, often ignoring women and marginalized communities. In fact, the environmental costs of AI, so we can think about water depletion, energy use, all of these things are often falling on low-income or indigenous communities located near data centers or mining operations. These are communities that are frequently underrepresented in AI design and governance. And then when it comes to climate stress, so we can think about here food security and migration, the insecurity around food rather, this is often hitting women and other marginalized communities hardest. And AI systems that aren't accounting for this, they can actually amplify the vulnerability of these populations through biased resource allocation or the exclusion from support systems. So some of these connections are just so deeply rooted. I think it's really important that we think of them in that connection when we talk about it. And there's so much we can learn from thinking about that connection as well.

  • Speaker #2

    Thank you, Dana. Thank you, Bettina, for the comprehensive understanding of gender bias in AI. I'd like to highlight first that AI is very, very powerful. It's a powerful tool, just like television, like books, like movies, but it is only a tool. And we still need to use our wisdom, our critical thinking to utilize information and stories that are coming from all the sources, including AI. Now, if we talk about financial technology, The finance, over 70% women have bank accounts, only one in three actively use their accounts. So a majority of women access banking services via men in their family. Financial literacy is low and so is technical literacy. It's not promoted in women in any geography. And as an example, women are 8% less likely than men to own a mobile phone, especially in low and middle income countries. Even though computing is evolving with machine learning and generative AI, the problems that are coming with that data that we have been gathering so far, it's still biased and it's high time that we take corrective measures. Because the way we're going to shape technology today, it'll shape the behavior, the routines, the values in future. And we want to make it good. We want to make it sustainable. We want to make it scalable.

  • Speaker #0

    Before we can tackle the issue, we need to understand it. So what do we mean by gender bias in AI and how does it manifest, particularly in sectors like banking and fintech, where data and algorithms increasingly shape decisions?

  • Speaker #3

    Yeah, so when we're talking about gender bias in AI, we're thinking about the ways in which the AI systems are. producing and reproducing unequal outcomes based on gender. And this is often with unintentional consequences that's happening in the real world. So what's happening in financial institutions and across the industry, it's something that we see matters very deeply because we are using AI to make decisions about so many things. So we can be thinking about the credit scoring. The loan approvals, insurance pricing, fraud detection, all of these things and decisions that are directly impacting people's everyday lives and, of course, their financial futures. The problem related to this bias that we're seeing. is really starting with the data. So if AI is trained on historical financial data, it's reflecting past discrimination. So say, for example, women historically have been granted smaller loans or have had limited access to credit. And in some countries, it's still illegal. So lots of discrimination throughout the ages in all countries, really. Then because of this data and the slant that we see in the data, We know that AI is learning these patterns and they're replicating them, even if gender isn't explicitly included as a variable in the data set. So, for example, there was a well-known case a few years ago where women were being offered dramatically lower credit limits than men, even when they had a similar financial profile. And that's really a great example, I would say, of bias showing up in a seemingly neutral algorithm. It's also showing up in more subtle ways as well. Behavioral data is there, things like shopping behaviors and patterns and even browser history. These can all carry gendered assumptions that end up influencing credit decisions. So investment platforms might unknowingly, for example, cater more to a male risk profile and then leave women underserved. And if the data is used to train these systems, If that data is underrepresenting women or non-binary individuals, the models just aren't going to perform as well for them. That's not just a technical issue, but it's really an inclusion issue.

  • Speaker #1

    I think what is interesting and important to keep in mind is that, as Dana was saying, there are two aspects in which this is going to be impacting women and minorities in general. One is that in the decisions that are made. as we believe that going through AI is going to take out the human bias. We're actually leaving more space to bias based on historical data and decisions that were made previously with a different view of things, but also because there is those blind angles and the impact that it has on the side, because as Shalini was saying, women are less likely to have mobile phones, are less likely to be active in their financial life, which means that when you take both of this aspect you're just increasing the risk of putting people to the side or not being inclusive enough and once again the risk is not having the capacity to solve today's problems by looking in a world in in an historical way instead of like changing or triggering that movement if we add another thing because we are obviously talking about the use of financial tools but the ai is is not only used within the financial tour, it's also used. for recruitment and it's also going to be used for trying to train and do financial literacy. So it really impacts the whole chain of value if we don't try to think about how we can change these kind of things.

  • Speaker #2

    Just two cents from my side. So at SPS, we have the digital sustainability pillars of accessibility and eco-design. And accessibility is not just about disability, it's also about how can different genders people of different age groups and digital literacy access the same systems. So, for example, high-pitched voices or ethnic accents, it's difficult for the voice recognition systems to recognize them. Or sometimes even there are higher error rates for women and people of color on facial recognition because the data itself, the data collection, the data curation, it was imbalanced. And that's where we need to start solving, not just from a technical perspective, but from a human perspective.

  • Speaker #0

    Could the way we build and run our digital tools actually make them fairer? In other words, can digital sustainability help us fight bias in AI systems?

  • Speaker #1

    I will start by re-explaining a little bit what is digital sustainability and then I'll... I led by peers who are better in the technical part, explain how we are actually using this to improve the way we work. But digital sustainability, as Shilin said, actually has four pillars, and NSVS works specifically with two of them. So the four pillars are going to be green IT, so how you make sure that your system are mindful of the impact they have in the environment, and you're going to be looking for improved performance, but also to make sure that The conditions. that you're offering are in line with the needs. So to give you an example, a client that might not be as educated in this kind of thing will tell you like, I need 100% availability for my lending rates. And you're like, well, in reality, you're only using your lending rates once a month to recalculate your interest rate for the loans ongoing. So maybe you don't need this. information available 100% of the time, 24-7. And probably some technical people will improve the way I explain this, but the idea is that you also need to dimension the services in line with the need of the services and not just with an idea of what it should be. Because as of today, and it is the same as personal views, like we want Amazon to deliver things in our door in an hour, but do we really need something in an hour to deliver every time? Not really. It's just like we have grown used to this way of working. So green IT is really about making sure that we design in a way that is in line with the needs, but that is mindful of reducing the impact that we have on the environment. Because as you know, it's not just about CO2. It is about water stress, as they were mentioning, and the impact that it will have in the communities. Because if we don't have enough water, we don't have enough food and all of those things. So that is one part. Then you have IT for green, but it's also preventive maintenance. and making sure that you update and upgrade in the right time and not just all the time ahead or after breakage, because that is also having an impact in the way we work. You have accessibility, which as Janine was saying, is not only about disability, but it's also about making sure that our software are available in low debit communities, because if you only have mobile bankings, you need to also be able to access your banking, even if you only have 3G or less. And it's also about if you're elderly and maybe you don't have the same access or capacity to use a phone, like how do you still get access to your banking services? And then you have your ethics pillar, which is about, and this is where it comes to what we're talking is about being mindful and thinking about how technology has an impact in the life of people. And not just thinking that because it's technology is going to be neutral and not going to have an impact. And that is where we're going to be able to. coming to things like an AI code of ethics and how much freedom we give to the AI or how we can think about what we are going to fit into the models in order to make sure that they respond to what we're expecting. And that's where I'm going to leave the floor to my peers who will be able to explain this even better than me.

  • Speaker #3

    I can talk about this from a technology management perspective when I did my PhD on this topic. And something that kept coming up is about the role that technology plays, right? So technology actually has certain affordances and we can think of them in this idea of the opportunities and constraints that we can design into the technologies themselves to get the outcomes we want. So for example, when we think about designing AI more inclusively, we can factor in different diverse voices, especially women, and know that. the technology will become more socially responsible and environmentally aware. And some of the things of those principles we can be thinking about when we are looking to reduce that gender bias and environmental degradation that's being perpetuated by AI systems today, we can be thinking about three ways or three techniques, so to speak. So first of all, we can think about how do we compensate? How do we change the technology to compensate for the harm that we're seeing? So we're using AI technologies, for example, to compensate for those negative outcomes of environmental degradation and gender bias by adding something positive. So it's about using technology to deliver something positive that's going to offset what's being negatively outcombed in that situation. So, for example. We may need to create data that has more gender balance than what was typically found in the data sets from the real world, right? So if we're missing data from women, we need to start creating synthetic data, maybe gathering more data from women to actually compensate for that situation. The second principle I can mention here is transformation. So how can we fundamentally change the technology, right? So the use of AI technologies to transfer. form negative outcomes and change that into something more positive. So using the technology at a more fundamental level to change what was that that was negative, right? So this bias we're talking about, the degradation of the environment. And we can design AI systems to fundamentally change the interaction between the AI systems and the users. So right now, users are actually really struggling with prompt engineering. It's hard to craft prompts. to limit the bias. when it comes to gender. It's very difficult. And the responses back also aren't providing the transparency needed for the users to understand their data sources. They're not understanding the interpretation of their prompts. And there's potential for bias to be creeping in at any stage. And users aren't seeing that. What is the bias? Where is it coming from? And so we can actually rethink the interaction entirely by giving users the tools that they need for prompt engineering and to enhance transparency so they can make decisions that will combat these issues. And we can transform these systems to add both enablers and constraints that would be useful for reducing gender bias and to help improve our ecological footprint. Now, the last principle I can mention here is counteraction. This is about the use of AI technologies to counteract negative outcomes. So we're using AI technology, for example, to prevent the occurrence of what is currently negative. So the bias again, and the degradation of the environment. So imagine in this situation, advanced AI systems that sit alongside developers and flag bias and recommend inclusive data sets and auto adjust models to reduce harm, like kind of like having an ethical co-pilot right next to you, right? 
that's going to evolve with every line of code and every prompt that you give. So those are some examples based on some of those technology management principles.

  • Speaker #2

    And as SBS design perspective from an inclusive design for banking software, we can bring in more universal design principles into the process itself. So for example, we can eliminate the biased assumptions and user flows. We could design with multiple personas that includes women entrepreneurs or gig workers or single parents and rural users. We could remove the gendered assumptions and product features. So, for example, we don't need to assume that only men are the primary bread earners or decision makers. We can use accessible interfaces and interactions for all abilities and literacies, financial literacy and technical literacy. For example, we can use plain. gender-neutral language. We can use visual aids that make it more inclusive. We provide readers, screen readers, text resizing, voice-assisted navigation. We can offer local language support, so especially in the regions where women or older users may be less fluent in the default language. And then we can make gender-aware credit or risk models, so the users are not penalized for gender or a marital status or a caregiving role. We can incorporate alternative data sources like the mobile payments, the community group behavior to assess the creditworthiness. So AI opens up the possibilities to make things more comprehensive and more inclusive because our processing power is immense and we can utilize that to make things more gender neutral.

  • Speaker #1

    One of the things that we can also do, and picking up on what Celine was saying before, as SBS, and that is something that we're currently working on, is also support our customers in better understanding these kind of things, because we have started this journey a couple of years ago, and although different people are at different rhythms, I think it is a unique opportunity that we have today to also have these discussions with our customers and help them be more mindful, and how they can use our technology in that sense, as Celine was saying. in trying to incorporate different ways within origination to make sure that they are more inclusive or reducing the idea that maybe the man is the sole breadwinner and things like that. So because of the role we play in our ecosystem, we also can be advisors and we can support this growth on our customer side. And it is something that we have started with some of our customers where we're giving digital sustainability. lessons or digital sustainability courses in order to be able to interpret and integrate these kind of things within their business processes and thinking.

  • Speaker #0

    Okay, so here is a bigger question. Can we create AI that's not just smart and efficient, but also good for people and the planet?

  • Speaker #1

    I think it's not so much about energy efficiency and responsibility. I will always go back to the same thing. We need to be aware of the problem and then we need to work to our solution. as long as we're using resources, as long as we're developing software, as long as we're consuming, we're still going to be using and consuming resources. But what matters is to think about the need. And for example, we do have within SBS a person who is responsible for studying the impact of AI in the environment and who we are creating a framework on how to make decisions in the way we implement today to make sure that we also stop the models before going too far because we do know that the further the AI model is trying to look for a better answer the higher the impact and the cost in energy and resources however the improvement becomes minimal as we move along so there are real questions about how to choose which model depending on which case we're using and how far we want to go in the research and all of those things are about understanding the problem, understanding our objective and trying to put the right level of solution. So it is about being in the right amount.

  • Speaker #2

    Equilibrium.

  • Speaker #1

    Thank you, Shadeen.

  • Speaker #2

    I would flip that question a little bit. So we should not say, is it possible? We should ask the question, how do we make it possible? Because it's increasingly essential to build AI systems that are both energy efficient and socially responsible. So we have to make deliberate design choices. We have to set up regulatory frameworks. And we have to have a shift in our mindset from the bigger and faster to what's fairer and smarter. So whether we're talking about optimizing the model architecture, as Bettina just talked about, or we are working into green AI training practices, or even edge computing on device AI, so that we save resources and the back and forth. We need to figure out solutions to make it. energy efficient for sure, because we know it's going to go big and it's going to get bigger and bigger as we move into the future.

  • Speaker #3

    Yeah, I would completely agree, Shaleen. And I think too, as we think about how do we reduce and remove bias in AI and the data, underlying data, we know that we need to use frameworks, we need to be auditing, we need to be retraining and retraining our models, right? All of these things are going to have an impact. on the environment. And the situation is once we start gaining these best practices and standards and we can apply them in an efficient way, that's really the best way to go about it is not to say, no, we've got to minimize how we're leveraging AI models and actually cut out things that are going to make it better, especially when it comes to reducing bias. But we have to be thinking smarter. and working smarter so that we do have less of an impact on the environment.

  • Speaker #0

    From a business point of view, why should companies actually care? Does tackling AI bias and going green really make business sense or is it just about doing the right thing?

  • Speaker #1

    As of today, there's a bigger interest. for our clients and this kind of things and they are numbers that shows that this is a question that is asked more and more in RFPs and RFIs. We also know that for example in Europe the CSRD which is the global CSR regulation at the moment is asking us to be accountable for all of the value chain which means our customers and our suppliers and from a business perspective being able to deliver. on these kind of things, being able to answer to this question and making sure that we have the right level of implementation in these kind of topics is going to allow us to have at least A, answer to the requirements, but also be able to have a differentiator in the type of products that we sell. We know that accessibility is a differentiator today in a lot of places. And for our customers, being able to propose these kind of services and being able to showcase that they're working toward this is also a differentiator. differentiator towards their customers. So it creates a value creation positive cycle that I think it is important to keep in mind. So in a very cynical and like money driven way of saying the world doing this actually make us more efficient, more performance. So we have a bigger profit because our products are spending less money in the way they work, but we're also getting more clients and more business. So from a business perspective, is very easy to justify. And I'm sure that Angeline will add some interesting points.

  • Speaker #2

    I fully agree. So energy efficient AI models, it lowers operational costs. It increases efficiency.

  • Speaker #0

    It's straight away good business. It's a leaner infrastructure. It's the optimization of storage, lower maintenance expenses. So yes, everything improves the operational costs. Plus, it's a competitive advantage and drives innovation because the moment you make a team that's focused on ethical AI and sustainability in the long run, you unlock creativity, you unlock future-proofing of solutions, and yeah, you get ahead essentially with diverse and inclusive. dreams. So there was a study with PCG. It shows that a 19% of innovation boost in companies with above average levels of diversity. So yes, it makes business sense. It makes innovation. It gives you a competitive advantage.

  • Speaker #1

    So what now? What do we do with all this? If you're a developer, a designer, or... it. making decisions that shape tech. How can you help build AI that is not just smarter, but also fairer and more sustainable?

  • Speaker #0

    So yes, it's a shared responsibility. And as people who are designing and developing AI solutions, we need to ensure that our systems are fair, transparent, unbiased. One, we need to use diverse teams and representative data sets. We need to regularly conduct the audits. We need to implement explainable AI so that we can cross-question it and understand how the decisions are being made. And we need to advocate for diversity in AI teams so that we're reducing blind spots.

  • Speaker #2

    Yeah, and AI certainly has a role too, exactly what Shalina is saying. And when we add on a layer of AI to do AI governance, for example, how can the technology itself be used. to all of these objectives that we've been talking about throughout this. I think that would be extremely powerful.

  • Speaker #3

    And I will just come back to what Shalit was saying in the beginning, which is like, we need to keep thinking that this is a tool. And as a tool, it is something that we need to govern. So we need to implement the framework. We need to have the right people working on it. And we need to make sure that we do the check that make us follow up and make sure that things are working the proper way. is really also our responsibility to make sure that this answers to our needs.

  • Speaker #1

    So to wrap things up, we've seen that building better AI isn't just a nice idea, it's something we have to do. And it's doable, so that's a good news, but it's just a start. So according to you, what will it take to keep pushing for technology that serves everyone equitably and responsibly?

  • Speaker #0

    So whether we like it or not, AI is going to shape the next generation's worldview. If it is biased, so is our future. So we have to tackle it, and we have to tackle it now. So building a world where people see limitless possibilities, regardless of their gender, caste, creed, color, anything, we need to make a fairer world. Because it's not just a technical flaw, it's a societal issue. And the question is not whether... AI can be fair. It's whether we have the will to make it so.

  • Speaker #2

    And I think this issue around governance and making sure that IT systems and AI systems are governed in the right way to get the right outcomes, as Shaleen's saying, that we do have a fair world and equitable outcomes. And all of these things is so critical and important. We want to see that governance standards continue to rise. and that they get implemented as best practice in every industry. So that's something that we're already seeing a lot of change on, which is great, but it's something that we have to keep pushing for and keep advocating for. And there's a role for everyone to play in this because AI touches everyone's lives, right? So all of us are responsible for getting our feedback back to companies, making sure we get the transparency we need, making sure that as systems progress, we are

  • Speaker #3

    we're being critical are we being treated fair are we pushing to create the world that we want to see i think to to highlight angeline's and dana i think we are all responsible for this so one of the aspects we haven't mentioned because we're talking really about the company but is about how everybody needs to be educated in this and how we need to try to make sure that every person whether it's ourselves our family our kids our community is aware that This is something we are all responsible for and that we keep advocating and expecting this from our companies, our governments, the schools where we are everywhere. We need to be, once again, and this is a word I use a lot, but we need to be mindful of the role we play in this game. Because if we are not asking the questions, if we are not being there, if we are not trying to change things and voice out the need that we have to. make this an important topic and have it on the table, it just gets thrown away with other things. And in a very cynical world where we are driven by financial market indicators, we also need internally to be able to continue proving that this brings value and that it makes sense. So that means knowing what we're following. It means having KPIs that we can showcase and being able to track these evolutions in order to make sure that this is taken seriously at every level, even though already... being there and voicing it out is an important part of the journey.

  • Speaker #1

    Thank you so much Bettina, Chaline and Dana for sharing your insights. It's been a very thoughtful conversation.

  • Speaker #0

    Thank you, Caroline.

  • Speaker #3

    Thank you. Yeah,

  • Speaker #2

    thank you so much.

  • Speaker #3

    Always so fun to be with you guys talking about this amazing topic.

  • Speaker #0

    Always.

Description

In this powerful episode, we dive into how gender bias in AI isn't just a tech problem—it's a sustainability issue, too. Join Bettina (Head of Sustainability at SBS), Dana (AI & Data Analytics Lead at SBS), and Shalien (Head of Product Design at SBS)—as they break down the hidden connections between inclusive technology and environmental impact : Why does AI often disadvantage women and marginalized communities? How can fairer algorithms also be greener? What are banks doing to design apps that really work for everyone?


Hosted by Ausha. See ausha.co/privacy-policy for more information.

Transcription

  • Speaker #0

    Welcome to FinTrans, the podcast series where we explore the hot trends and news in the financial sector with experts. Today, we are unpacking a complex but crucial topic, gender bias in AI, and its unexpected connection to sustainability. And to explore this, I am joined by experts at SBS, Bettina Vaccaro-Carbon, Head of Sustainability, Chalene Kishore, Head of Product Design, and Danelle Lundbury, Head of Data. analytics and AI. Together, we will dive into how bias shows up in financial technology, how inclusive design can make a difference, and why building fairer, greener AI is not just the right thing to do, but also the smart thing. Before we get started, please everyone introduce yourself.

  • Speaker #1

    Hi, my name is Bettina. Thank you, Caroline, for having us today. We've already started this discussion a couple of months ago about our work in digital sustainability and sustainability as a whole. So my role at SVS is Head of Sustainability, which means that I cover all of the CSR topics within our company, and in particular, a topic that is quite dear to our heart, which is digital sustainability.

  • Speaker #2

    As for me, I'm Shaleen Kishore. I lead product design for SPS. I'm an architect by degree. I'm a creative at heart and a design leader by choice. And during the last decades of my career, I've worked with different kinds of businesses, as well as large global companies. I've traveled extensively. And professionally, I'm looking for the most frequent and most frustrating pain points to design solutions for them. Gender bias, is one such example. And in a patriarchal society, in a male-dominated professional environment, it becomes all the more important to address it and find solutions so that we do not take it forward as it has been in the past.

  • Speaker #3

    And I'm Dana Lenbury, and I'm currently leading data analytics and AI initiatives at SBS. So in my role, what I get to do is drive essential data initiatives that are helping our product lines and our clients leverage data and AI effectively. And I've been working in the financial services sector for nearly 20 years. And before joining SBS about three years ago, I led work in data science, utilizing AI and machine learning and statistics to generate business value and drive innovation for financial institutions. Really great to be on this podcast. Thank you for having me.

  • Speaker #0

    So let's start with the big picture. AI is becoming more embedded in our daily lives, from hiring to lending, and so do the risks of reinforcing old inequalities. But beyond fairness, there is a surprising link between bias and sustainability. Can you tell us how are these two connected?

  • Speaker #1

    From our sustainability... point of view, I think that is something that's something that we need to keep in mind and that we often miss is that sustainability is not just about the environment or being green or maintaining resources. It's also about how we create an economy that as a whole is going to be able to continue growing. And that is, it's not that it is inclusive, but being inclusive is part of the ways in which we support continuous growth and an economy that is going to be lasting in the long term. If you take the world. 50 years ago, and it's a bit difficult to say it in the current moment, but if you take it 50 years ago, we were excluding almost 50% of the population, which of course is going to have an impact in the way our economy can grow as a whole. And I feel like I say economy every two words, but the idea is that we cannot put to the side a significant part of a population. And of course, this is women in the bias that we're talking about today, but this is also about Um. other underserved communities. It can be about minorities in general. It is also about people with handicaps and things like that. So every time we exclude a part of the community, we are reducing our capacity to grow by limiting the resources we can actually harness.

  • Speaker #3

    When we're thinking about gender bias in AI, we can be talking about the ways in which the AI systems are producing unequal outcomes that are based on gender. So often this is unintentionally, but it has real world consequences. Now, when we start to think about the two topics together, so gender bias in AI and how does that actually relate to environmental sustainability, This is actually a very powerful yet. Overlook question a lot of times. So gender bias and AI environmental sustainability, they might seem like completely separate issues, but they're actually deeply interconnected, even at the root of where they come from. And we can see this through the lens of equity, impact and systemic design. So let me talk a little bit about the roots of these two issues. So the reality is that they share a certain. history when it comes to systemic inequality. So both gender bias in AI and environmental degradation are symptoms of systems designed without inclusive representation. So AI systems, they're often reflecting the values of the dominant group, often ignoring women and marginalized communities. In fact, the environmental costs of AI, so we can think about water depletion, energy use, all of these things are often falling on low-income or indigenous communities located near data centers or mining operations. These are communities that are frequently underrepresented in AI design and governance. And then when it comes to climate stress, so we can think about here food security and migration, the insecurity around food rather, this is often hitting women and other marginalized communities hardest. And AI systems that aren't accounting for this, they can actually amplify the vulnerability of these populations through biased resource allocation or the exclusion from support systems. So some of these connections are just so deeply rooted. I think it's really important that we think of them in that connection when we talk about it. And there's so much we can learn from thinking about that connection as well.

  • Speaker #2

    Thank you, Dana. Thank you, Bettina, for the comprehensive understanding of gender bias in AI. I'd like to highlight first that AI is very, very powerful. It's a powerful tool, just like television, like books, like movies, but it is only a tool. And we still need to use our wisdom, our critical thinking to utilize information and stories that are coming from all the sources, including AI. Now, if we talk about financial technology, The finance, over 70% women have bank accounts, only one in three actively use their accounts. So a majority of women access banking services via men in their family. Financial literacy is low and so is technical literacy. It's not promoted in women in any geography. And as an example, women are 8% less likely than men to own a mobile phone, especially in low and middle income countries. Even though computing is evolving with machine learning and generative AI, the problems that are coming with that data that we have been gathering so far, it's still biased and it's high time that we take corrective measures. Because the way we're going to shape technology today, it'll shape the behavior, the routines, the values in future. And we want to make it good. We want to make it sustainable. We want to make it scalable.

  • Speaker #0

    Before we can tackle the issue, we need to understand it. So what do we mean by gender bias in AI and how does it manifest, particularly in sectors like banking and fintech, where data and algorithms increasingly shape decisions?

  • Speaker #3

    Yeah, so when we're talking about gender bias in AI, we're thinking about the ways in which the AI systems are. producing and reproducing unequal outcomes based on gender. And this is often with unintentional consequences that's happening in the real world. So what's happening in financial institutions and across the industry, it's something that we see matters very deeply because we are using AI to make decisions about so many things. So we can be thinking about the credit scoring. The loan approvals, insurance pricing, fraud detection, all of these things and decisions that are directly impacting people's everyday lives and, of course, their financial futures. The problem related to this bias that we're seeing. is really starting with the data. So if AI is trained on historical financial data, it's reflecting past discrimination. So say, for example, women historically have been granted smaller loans or have had limited access to credit. And in some countries, it's still illegal. So lots of discrimination throughout the ages in all countries, really. Then because of this data and the slant that we see in the data, We know that AI is learning these patterns and they're replicating them, even if gender isn't explicitly included as a variable in the data set. So, for example, there was a well-known case a few years ago where women were being offered dramatically lower credit limits than men, even when they had a similar financial profile. And that's really a great example, I would say, of bias showing up in a seemingly neutral algorithm. It's also showing up in more subtle ways as well. Behavioral data is there, things like shopping behaviors and patterns and even browser history. These can all carry gendered assumptions that end up influencing credit decisions. So investment platforms might unknowingly, for example, cater more to a male risk profile and then leave women underserved. And if the data is used to train these systems, If that data is underrepresenting women or non-binary individuals, the models just aren't going to perform as well for them. That's not just a technical issue, but it's really an inclusion issue.

  • Speaker #1

    I think what is interesting and important to keep in mind is that, as Dana was saying, there are two aspects in which this is going to be impacting women and minorities in general. One is that in the decisions that are made. as we believe that going through AI is going to take out the human bias. We're actually leaving more space to bias based on historical data and decisions that were made previously with a different view of things, but also because there is those blind angles and the impact that it has on the side, because as Shalini was saying, women are less likely to have mobile phones, are less likely to be active in their financial life, which means that when you take both of this aspect you're just increasing the risk of putting people to the side or not being inclusive enough and once again the risk is not having the capacity to solve today's problems by looking in a world in in an historical way instead of like changing or triggering that movement if we add another thing because we are obviously talking about the use of financial tools but the ai is is not only used within the financial tour, it's also used. for recruitment and it's also going to be used for trying to train and do financial literacy. So it really impacts the whole chain of value if we don't try to think about how we can change these kind of things.

  • Speaker #2

    Just two cents from my side. So at SPS, we have the digital sustainability pillars of accessibility and eco-design. And accessibility is not just about disability, it's also about how can different genders people of different age groups and digital literacy access the same systems. So, for example, high-pitched voices or ethnic accents, it's difficult for the voice recognition systems to recognize them. Or sometimes even there are higher error rates for women and people of color on facial recognition because the data itself, the data collection, the data curation, it was imbalanced. And that's where we need to start solving, not just from a technical perspective, but from a human perspective.

  • Speaker #0

    Could the way we build and run our digital tools actually make them fairer? In other words, can digital sustainability help us fight bias in AI systems?

  • Speaker #1

    I will start by re-explaining a little bit what is digital sustainability and then I'll... I led by peers who are better in the technical part, explain how we are actually using this to improve the way we work. But digital sustainability, as Shilin said, actually has four pillars, and NSVS works specifically with two of them. So the four pillars are going to be green IT, so how you make sure that your system are mindful of the impact they have in the environment, and you're going to be looking for improved performance, but also to make sure that The conditions. that you're offering are in line with the needs. So to give you an example, a client that might not be as educated in this kind of thing will tell you like, I need 100% availability for my lending rates. And you're like, well, in reality, you're only using your lending rates once a month to recalculate your interest rate for the loans ongoing. So maybe you don't need this. information available 100% of the time, 24-7. And probably some technical people will improve the way I explain this, but the idea is that you also need to dimension the services in line with the need of the services and not just with an idea of what it should be. Because as of today, and it is the same as personal views, like we want Amazon to deliver things in our door in an hour, but do we really need something in an hour to deliver every time? Not really. It's just like we have grown used to this way of working. So green IT is really about making sure that we design in a way that is in line with the needs, but that is mindful of reducing the impact that we have on the environment. Because as you know, it's not just about CO2. It is about water stress, as they were mentioning, and the impact that it will have in the communities. Because if we don't have enough water, we don't have enough food and all of those things. So that is one part. Then you have IT for green, but it's also preventive maintenance. and making sure that you update and upgrade in the right time and not just all the time ahead or after breakage, because that is also having an impact in the way we work. You have accessibility, which as Janine was saying, is not only about disability, but it's also about making sure that our software are available in low debit communities, because if you only have mobile bankings, you need to also be able to access your banking, even if you only have 3G or less. And it's also about if you're elderly and maybe you don't have the same access or capacity to use a phone, like how do you still get access to your banking services? And then you have your ethics pillar, which is about, and this is where it comes to what we're talking is about being mindful and thinking about how technology has an impact in the life of people. And not just thinking that because it's technology is going to be neutral and not going to have an impact. And that is where we're going to be able to. coming to things like an AI code of ethics and how much freedom we give to the AI or how we can think about what we are going to fit into the models in order to make sure that they respond to what we're expecting. And that's where I'm going to leave the floor to my peers who will be able to explain this even better than me.

  • Speaker #3

    I can talk about this from a technology management perspective; I did my PhD on this topic. And something that kept coming up is the role that technology plays, right? Technology actually has certain affordances, and we can think of them as the opportunities and constraints that we can design into the technologies themselves to get the outcomes we want. So for example, when we think about designing AI more inclusively, we can factor in diverse voices, especially women, and know that the technology will become more socially responsible and environmentally aware. When we're looking to reduce the gender bias and environmental degradation being perpetuated by AI systems today, we can think about three principles, or three techniques, so to speak. First, we can think about how we compensate: how do we change the technology to compensate for the harm that we're seeing? We use AI technologies to compensate for the negative outcomes of environmental degradation and gender bias by adding something positive that offsets them. So, for example, we may need to create data that has more gender balance than what was typically found in the data sets from the real world. If we're missing data from women, we need to start creating synthetic data, or gathering more data from women, to actually compensate for that situation. The second principle is transformation: how can we fundamentally change the technology? This is the use of AI technologies to transform negative outcomes into something more positive, using the technology at a more fundamental level to change what was negative, namely this bias we're talking about and the degradation of the environment. We can design AI systems to fundamentally change the interaction between the AI systems and the users. Right now, users are really struggling with prompt engineering. It's hard to craft prompts that limit bias when it comes to gender. And the responses coming back aren't providing the transparency users need to understand the data sources; they're not seeing how their prompts were interpreted, and there's potential for bias to creep in at any stage without users noticing: what is the bias, and where is it coming from? So we can rethink the interaction entirely, by giving users the tools they need for prompt engineering and by enhancing transparency, so they can make decisions that will combat these issues. We can transform these systems to add both enablers and constraints that reduce gender bias and help improve our ecological footprint. Now, the last principle is counteraction: the use of AI technologies to counteract negative outcomes, to prevent the occurrence of what is currently negative, again the bias and the degradation of the environment. So imagine advanced AI systems that sit alongside developers, flag bias, recommend inclusive data sets, and auto-adjust models to reduce harm, kind of like having an ethical co-pilot right next to you that evolves with every line of code and every prompt that you give. So those are some examples based on those technology management principles.
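To make the "compensate" principle concrete, here is a minimal sketch of rebalancing a training set whose gender distribution is skewed. It assumes a pandas DataFrame with a `gender` column; the column name, the `rebalance_by_gender` helper, and the naive oversampling approach are illustrative, not a description of any specific SBS pipeline. Real projects would lean toward collecting more representative data or generating higher-quality synthetic records.

```python
import pandas as pd

def rebalance_by_gender(df: pd.DataFrame, group_col: str = "gender",
                        random_state: int = 42) -> pd.DataFrame:
    """Oversample under-represented groups so each appears equally often."""
    counts = df[group_col].value_counts()
    target = counts.max()
    parts = []
    for group, n in counts.items():
        subset = df[df[group_col] == group]
        # Sample with replacement only when a group is smaller than the target.
        parts.append(subset.sample(n=target, replace=n < target,
                                   random_state=random_state))
    return pd.concat(parts, ignore_index=True)

# Example: a sample with 20 women and 80 men becomes 80/80 after rebalancing.
df = pd.DataFrame({"gender": ["F"] * 20 + ["M"] * 80, "income": range(100)})
print(rebalance_by_gender(df)["gender"].value_counts())
```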

  • Speaker #2

    And from an SBS design perspective, for inclusive design in banking software, we can bring more universal design principles into the process itself. So, for example, we can eliminate biased assumptions in user flows. We can design with multiple personas that include women entrepreneurs, gig workers, single parents, and rural users. We can remove gendered assumptions from product features; for example, we don't need to assume that only men are the primary breadwinners or decision makers. We can build accessible interfaces and interactions for all abilities and all literacies, financial and technical. For example, we can use plain, gender-neutral language and visual aids that make things more inclusive. We can provide screen readers, text resizing, and voice-assisted navigation. We can offer local language support, especially in regions where women or older users may be less fluent in the default language. And we can make credit and risk models gender-aware, so users are not penalized for gender, marital status, or a caregiving role, and incorporate alternative data sources, like mobile payments or community group behavior, to assess creditworthiness. So AI opens up the possibility to make things more comprehensive and more inclusive, because the processing power is immense and we can use it to make things more gender neutral.
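On the credit-model point, here is a minimal sketch of the kind of check a team might run over past decisions, assuming a DataFrame with illustrative `gender` and `approved` columns. The demographic-parity gap shown is only one of several fairness metrics one could apply, and a large gap is a signal to investigate rather than proof of bias.

```python
import pandas as pd

def approval_rate_gap(decisions: pd.DataFrame,
                      group_col: str = "gender",
                      outcome_col: str = "approved") -> float:
    """Report approval rates per group and the demographic-parity gap."""
    rates = decisions.groupby(group_col)[outcome_col].mean()
    gap = rates.max() - rates.min()
    print(rates.to_string())
    print(f"Demographic-parity gap: {gap:.2%}")
    return gap

# Example with made-up decisions: here women are approved 25 points less often.
decisions = pd.DataFrame({
    "gender":   ["F", "F", "F", "F", "M", "M", "M", "M"],
    "approved": [1,    0,   0,   1,   1,   1,   1,   1],
})
approval_rate_gap(decisions)
```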

  • Speaker #1

    One of the things that we can also do as SBS, picking up on what Shaleen was saying before, and something we're currently working on, is to support our customers in better understanding these kinds of things. We started this journey a couple of years ago, and although different people move at different rhythms, I think we have a unique opportunity today to have these discussions with our customers and help them be more mindful of how they can use our technology in that sense, as Shaleen was saying: incorporating different approaches within origination to make sure they are more inclusive, or moving away from the idea that the man is the sole breadwinner, and things like that. Because of the role we play in our ecosystem, we can also be advisors and support this growth on our customers' side. It is something that we have started with some of our customers, where we're giving digital sustainability lessons or courses so that they can interpret and integrate these kinds of things within their business processes and thinking.

  • Speaker #0

    Okay, so here is a bigger question. Can we create AI that's not just smart and efficient, but also good for people and the planet?

  • Speaker #1

    I think it's not so much a question of energy efficiency versus responsibility. I will always go back to the same thing: we need to be aware of the problem, and then we need to work toward a solution. As long as we're using resources, as long as we're developing software, as long as we're consuming, we're still going to be using and consuming resources. But what matters is to think about the need. For example, within SBS we do have a person who is responsible for studying the impact of AI on the environment, and with whom we are creating a framework for how to make decisions in the way we implement today, to make sure that we stop the models before going too far. We know that the further an AI model goes looking for a better answer, the higher the impact and the cost in energy and resources, while the improvement becomes minimal as we move along. So there are real questions about how to choose the model depending on the use case, and how far we want to go in the search. All of this is about understanding the problem, understanding our objective, and putting in the right level of solution. So it is about finding the right amount.
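As a sketch of what "stopping the models before going too far" could look like in code, here is a generic diminishing-returns stopping rule. This is an illustration, not SBS's actual framework; `step_fn`, the thresholds, and the patience logic are all assumptions.

```python
def train_with_diminishing_returns(step_fn, max_steps: int = 1000,
                                   min_rel_gain: float = 0.01,
                                   patience: int = 3) -> float:
    """Run training steps until extra compute stops paying for itself."""
    best = None
    stale = 0
    for _ in range(max_steps):
        score = step_fn()  # one step/epoch; returns a validation score (higher = better)
        if best is None:
            best = score
            continue
        rel_gain = (score - best) / max(abs(best), 1e-9)
        if rel_gain < min_rel_gain:
            stale += 1
            if stale >= patience:
                break  # marginal gains no longer justify the energy cost
        else:
            stale = 0
        best = max(best, score)
    return best

# Toy example: scores plateau quickly, so the loop stops after six steps.
scores = iter([0.60, 0.70, 0.74, 0.745, 0.746, 0.7465, 0.747, 0.75])
print(train_with_diminishing_returns(lambda: next(scores), max_steps=8))
```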

  • Speaker #2

    Equilibrium.

  • Speaker #1

    Thank you, Shaleen.

  • Speaker #2

    I would flip that question a little bit. We should not ask, "Is it possible?" We should ask, "How do we make it possible?" Because it's increasingly essential to build AI systems that are both energy efficient and socially responsible. So we have to make deliberate design choices, we have to set up regulatory frameworks, and we have to shift our mindset from "bigger and faster" to "fairer and smarter." Whether we're optimizing the model architecture, as Bettina just talked about, adopting green AI training practices, or even using edge computing and on-device AI to save resources and the back-and-forth, we need to figure out solutions to make it energy efficient, for sure, because we know it's going to go big, and it's going to get bigger and bigger as we move into the future.

  • Speaker #3

    Yeah, I would completely agree, Shaleen. And as we think about how to reduce and remove bias in AI and its underlying data, we know that we need to use frameworks, we need to be auditing, and we need to be regularly retraining our models. All of these things are going to have an impact on the environment. But once we establish these best practices and standards and can apply them efficiently, that's really the way to go. The answer is not to say we have to minimize how we leverage AI models and cut out the things that make them better, especially when it comes to reducing bias. We have to be thinking smarter and working smarter, so that we do have less of an impact on the environment.

  • Speaker #0

    From a business point of view, why should companies actually care? Does tackling AI bias and going green really make business sense or is it just about doing the right thing?

  • Speaker #1

    As of today, there's a growing interest from our clients in these kinds of things, and there are numbers showing that this question is asked more and more in RFPs and RFIs. We also know that, for example, in Europe the CSRD, the Corporate Sustainability Reporting Directive, which is the main CSR regulation at the moment, is asking us to be accountable for the whole value chain, which means our customers and our suppliers. From a business perspective, being able to deliver on these kinds of things, being able to answer this question, and making sure that we have the right level of implementation on these topics allows us, first, to answer the requirements, but also to have a differentiator in the type of products that we sell. We know that accessibility is a differentiator today in a lot of places, and for our customers, being able to propose these kinds of services and to showcase that they're working toward this is also a differentiator towards their own customers. So it creates a positive value-creation cycle that I think is important to keep in mind. To put it in a very cynical, money-driven way of seeing the world: doing this actually makes us more efficient and more performant, so we have a bigger profit because our products cost less to run, and we're also getting more clients and more business. So from a business perspective, it is very easy to justify. And I'm sure that Shaleen will add some interesting points.

  • Speaker #2

    I fully agree. Energy-efficient AI models lower operational costs and increase efficiency. It's straight away good business: leaner infrastructure, optimized storage, lower maintenance expenses. So yes, all of it improves operational costs. Plus, it's a competitive advantage and it drives innovation, because the moment you build a team focused on ethical AI and sustainability, in the long run you unlock creativity, you future-proof your solutions, and you essentially get ahead with diverse and inclusive teams. There was a study by BCG showing a 19% innovation boost in companies with above-average levels of diversity. So yes, it makes business sense, it drives innovation, and it gives you a competitive advantage.

  • Speaker #1

    So what now? What do we do with all this? If you're a developer, a designer, or anyone making decisions that shape tech, how can you help build AI that is not just smarter, but also fairer and more sustainable?

  • Speaker #0

    So yes, it's a shared responsibility. And as people who design and develop AI solutions, we need to ensure that our systems are fair, transparent, and unbiased. One, we need to use diverse teams and representative data sets. We need to conduct audits regularly. We need to implement explainable AI, so that we can cross-question it and understand how decisions are being made. And we need to advocate for diversity in AI teams, so that we reduce blind spots.
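On the explainable-AI point, here is a minimal sketch of one common technique, permutation importance, using scikit-learn on synthetic data. The feature names and data are hypothetical, and a real credit model would warrant much richer explainability tooling; this only shows the basic idea of cross-questioning which inputs drive decisions.

```python
import numpy as np
from sklearn.inspection import permutation_importance
from sklearn.linear_model import LogisticRegression

# Hypothetical credit-decision features; names and data are illustrative.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))  # columns: income, repayment_history, age
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = LogisticRegression().fit(X, y)

# Shuffle one feature at a time and measure the score drop: the features
# that matter most to the model show the largest drops.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, imp in zip(["income", "repayment_history", "age"],
                     result.importances_mean):
    print(f"{name:>18}: {imp:.3f}")
```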

  • Speaker #2

    Yeah, and AI certainly has a role too, exactly as Shaleen is saying. When we add a layer of AI to do AI governance, for example, the technology itself can be used to advance all of the objectives we've been talking about throughout this conversation. I think that would be extremely powerful.

  • Speaker #3

    And I will just come back to what Shaleen was saying in the beginning: we need to keep thinking of this as a tool. And as a tool, it is something that we need to govern. So we need to implement the framework, we need to have the right people working on it, and we need to put in place the checks that make us follow up and make sure that things are working the proper way. It is also our responsibility to make sure that this answers our needs.

  • Speaker #1

    So to wrap things up, we've seen that building better AI isn't just a nice idea; it's something we have to do. And it's doable, so that's good news, but it's just a start. So, according to you, what will it take to keep pushing for technology that serves everyone equitably and responsibly?

  • Speaker #0

    So whether we like it or not, AI is going to shape the next generation's worldview. If it is biased, so is our future. So we have to tackle it, and we have to tackle it now, building a world where people see limitless possibilities regardless of their gender, caste, creed, or color. We need to make a fairer world, because this is not just a technical flaw; it's a societal issue. And the question is not whether AI can be fair. It's whether we have the will to make it so.

  • Speaker #2

    And I think this issue around governance, making sure that IT systems and AI systems are governed in the right way to get the right outcomes, as Shaleen is saying, so that we do have a fair world and equitable outcomes, is so critical and important. We want to see governance standards continue to rise and get implemented as best practice in every industry. That's something we're already seeing a lot of change on, which is great, but it's something we have to keep pushing for and keep advocating for. And there's a role for everyone to play in this, because AI touches everyone's lives, right? All of us are responsible for getting our feedback back to companies, making sure we get the transparency we need, and making sure that, as systems progress, we're being critical: Are we being treated fairly? Are we pushing to create the world that we want to see?

  • Speaker #3

    To echo Shaleen's and Dana's points, I think we are all responsible for this. One of the aspects we haven't mentioned, because we've been talking mostly about the company, is that everybody needs to be educated in this, and we need to make sure that every person, whether it's ourselves, our family, our kids, or our community, is aware that this is something we are all responsible for, and that we keep advocating for it and expecting it from our companies, our governments, our schools, everywhere. Once again, and this is a word I use a lot, we need to be mindful of the role we play in this game. Because if we are not asking the questions, if we are not present, if we are not trying to change things and voice the need to make this an important topic and keep it on the table, it just gets thrown away with other things. And in a very cynical world driven by financial market indicators, we also need, internally, to be able to keep proving that this brings value and that it makes sense. That means knowing what we're tracking; it means having KPIs that we can showcase and following these evolutions over time, to make sure that this is taken seriously at every level, even though just being there and voicing it is already an important part of the journey.

  • Speaker #1

    Thank you so much, Bettina, Shaleen, and Dana, for sharing your insights. It's been a very thoughtful conversation.

  • Speaker #0

    Thank you, Caroline.

  • Speaker #3

    Thank you.

  • Speaker #2

    Yeah, thank you so much.

  • Speaker #3

    Always so fun to be with you guys talking about this amazing topic.

  • Speaker #0

    Always.
