Ethics in AI's Wild West: Navigating Regulation, Bias, and Responsibility in 2025
AI ethics isn't about sides—it's about balancing values. As rules evolve, it's up to companies and users to push for transparency and ethics. As Michelle Frost told Hannes Lowette, “Don’t be evil” is still the key principle in our AI future.
Introduction and Background
Hannes Lowette: Hello, I'm Hannes. I'm here in London today with Michelle Frost. I work as a principal consultant for a company called Axxes in Belgium. We're going to be talking about ethics in AI. But first, allow Michelle to introduce herself to you.
Michelle Frost: Thank you. As Hannes said, my name is Michelle Frost. I'm an AI advocate at JetBrains. I started my career in machine learning in 2017, began digging into the ethical implications of machine learning and AI about six years ago, and last year completed a Master of Science in AI at Johns Hopkins.
Hannes Lowette: So you're well established in this field. And I have so many questions, because as a developer I look at AI and I see all these problems, and I'm told that in your industry they call this the alignment problem: trying to balance different, often conflicting angles in AI. For instance, when it comes to things like bias and performance, there's always a whole bunch of factors to take into account. Can you elaborate a little bit on that?
Michelle Frost: First of all, let me back up a little bit and tell you that the field has been changing a lot. There are fewer experts than you would think. I think a lot of people are still trying to figure things out as we go, together, and trying to help each other make the right decisions.
When it comes to ethics operating in AI, it comes down to values. As humans, we have a set of values. Everyone you meet believes in something, and for every belief that a person holds, there's an opposing belief. I won't get into whether one of them is right or wrong. But we see the same thing in AI. It is very human, very intertwined. It's very closely related to our own beliefs.
So the value alignment problem in AI is basically trying to find, as ethicists, the middle ground between each of these conflicting values. You mentioned bias, for example. When we look at the field of fairness, which is the mitigation of bias, many models are based on historically biased data, and in order to increase fairness, accuracy sometimes takes a hit. So do you prioritize model performance, or do you prioritize fairness?
Or another instance, privacy versus security? A lot of facial recognition technologies fall into this category. So should CCTV cameras be able to immediately know who you are? There's a security level for that, right? There's national security being prioritized, but at the risk of individual privacy.
Hannes Lowette: Exactly. So when you look at these tradeoffs, I'm assuming there's no one size fits all for any application, right? This is always something that needs to be reevaluated for everything that you build with AI?
Michelle Frost: There are a couple of different frameworks for going about this. But the answer we always have to give is "it depends." Anytime someone asks a question in this field, it's truly "it depends": on the company, on the product, on the industry it operates within and the surrounding regulations, or maybe the lack thereof, and on the political climate of the government that the product, or its users, operate under. Think of US companies that also have users in the EU.
Hannes Lowette: Also, the nature of the source data might be one of the things that warrants certain approaches, especially if you're dealing with medical records, for instance. That will all play into it.
Michelle Frost: It depends.
Recommended talk: Machine Ethics • Nell Watson • GOTO 2019
Regulatory Changes and Political Impact
Hannes Lowette: It all depends. Well, we live in interesting times. Now, for you as a viewer, this interview takes place at the end of January 2025, which means that Trump has just taken office and there's a whole bunch of interesting things happening: the announcement of Project Stargate, which is going to lead to considerable investments in AI; DeepSeek, which is popping up just today, literally; and of course the deregulation of AI in the US. Let's maybe start there. A lot of the previous AI regulation has been lifted as one of Trump's first acts after taking office. What's your take on that?
Michelle Frost: Well, maybe we'll back up a little bit and look at what the previous administration, under Biden, did too. In 2022, the White House first introduced the Blueprint for an AI Bill of Rights: what should we be doing as a country to start investigating the different types of harm that could come from AI, while still considering innovation? We still want to enable innovation, but we also need to anticipate the risks and create regulations that act as guardrails.
Then in 2023, the Biden AI executive order was signed. There was also quite a bit of funding put into organizations like NIST, the National Institute of Standards and Technology. NIST has truly been a leader over the last few years in creating frameworks for bias evaluation and mitigation, and for human-centered design in AI processes. So there was quite a bit of movement happening in the last few years in the US toward AI regulation and toward prioritizing the ethical development of AI.
And on day one of the Trump administration, all of that pretty much got put on pause and seems to have been thrown out the window. On the 20th, the first day of his administration, he repealed 50 different executive orders of the previous administration, basically saying they needed to be reviewed. One of them was the AI executive order. The next day there was a new Trump-era EO that, and I can't remember the exact wording, basically stated that AI development should be free from social agendas. We all have different ideas of what a social agenda is, so interpret that as you will. And then on the 23rd of January, the Stargate project was announced, with about 500 billion in funding. Among the stakeholders, 40% is OpenAI and 40% is SoftBank.
Hannes Lowette: Now, that's a lot to take in. And I think it's pretty safe to say that we're going to see lots of movements in the field of AI. What do you think is the biggest risk of the deregulation that has just happened?
Michelle Frost: Well, it has been very interesting, because responsible AI has become a bit of a hype term in and of itself in the last few years. At the risk of upsetting a few people, I'll say that in the last year I've seen quite a bit of box-ticking, very similar to the 2020 DEI checkbox, where you really have to decide as an individual: does this company truly care about responsible AI practices, or is it just responding to public interest and the broader climate of what's been happening?
And already with the Trump administration, we're seeing a lot of corporations across the States pulling back their DEI efforts, so it's likely that the responsible AI standards they've adopted will follow. Which means it really comes down to individual corporations, number one, to decide: do they care about ethics in AI? Do they care about trying to align their values with their development? And then at the consumer level, as a consumer I have the power to put pressure on companies and say, I will support you if you meet my values. AI is going to play into that as well.
Hannes Lowette: Yeah. And in the past we've seen, not just in the field of AI, that if the push for DEI doesn't come from a higher authority, big corporations are often not very motivated to take steps in, let's say, the right direction. So that's not really hopeful, then. Is there a good side to it as well?
Michelle Frost: Yeah. So I will back up and say there are going to have to be short-term actions taken, and you will see companies take them. The risk with that, too, is a lack of standardization. If every company has its own set of rules, it's hard enough when we're deciding between different governing bodies, let alone individual corporations.
The benefit that people will advocate for, of course, is economic advancement and innovation without strings attached. There are valid arguments for that. Just like the value alignment conflicts we talked about, on every side of these issues there is a valid argument. Personally, I think we often look to tomorrow and try to predict the problem that is going to come tomorrow. AGI has been such a buzzword the last few years. Is AI coming for my job?
Hannes Lowette: Is the AI coming to take my job?
Michelle Frost: We have a lot of fears of tomorrow, of next week. And yet we've had a lot of these issues for years, if not decades in machine learning and more old school AI that we still haven't addressed and solved. So in some ways, I think that it's a distraction from problems that we still have that need to be addressed.
Hannes Lowette: Right. So you're basically balancing the technical advancement of what AI models can do, and the science behind that, against solving the problems with the current types of models, which are clearly still there and haven't been solved yet.
Michelle Frost: Yeah. And I'll add to that: this is really a socio-technical problem. The history of AI is deeply connected to the history of humans. All these issues we're talking about are really human problems, and now we're amplifying them with a technology that is speeding some of them up. But we seem to try to solve the problem either solely in policy or solely on the tech side, solely with regulation and lawyers. It really needs all of these parties to invest in the problem if we're ever going to make meaningful advances.
Recommended talk: Where AI Meets Code • Michael Feathers • GOTO 2024
Job Impact and Industry Changes
Hannes Lowette: All right. Now we see that using AI responsibly leads to some great things. As a programmer, for instance, what I see is having tools that help me write code and understand code, and all that sort of stuff. That is fantastic, and it increases productivity. From my point of view, I don't think AI is going to take my job anytime soon, because I truly believe that understanding the business problem is not something that AI is good at at the moment.
But there are definitely some jobs in the tech industry that might be affected, especially when you look at a lot of offshored jobs. That is typically work that is very well specced out, where the programming then happens somewhere else. Those are jobs that are low on the insight required from the person doing them. I think those jobs might be affected. What other fields do you see big impacts coming to from these AI models that we may have to be aware of?
Michelle Frost: Forecasting is something that we've always gotten wrong. I can no longer make the weather analogy because they're actually pretty decent at that now.
Hannes Lowette: There's good models for that.
Michelle Frost: There's good models for that. But when it comes to shifts in labor, and what automation is going to take away historically, we've gotten it wrong. So I could make some assumptions. But also, every time we make an advancement in a certain model, it changes the game. We're trying to make predictions in a shifting landscape. We're also trying to build software in a shifting landscape. That's hard. In some respects we've always been doing that. But this one moves a little bit faster.
Hannes Lowette: Like we have agile.
Michelle Frost: Right? Hopefully agile-ish.
Hannes Lowette: Agile-ish.
Michelle Frost: So I've seen quite a few social jobs flagged as at risk. Like restaurant workers or store clerks: there is a human element there, but for the most part, it's a transaction.
Hannes Lowette: Right?
Michelle Frost: I look at a menu, I say, hey, could I please have XYZ? And this to drink and there's a transaction that happens. Now there's a human element to that. I think some people would automatically say, well, actually, I go into a restaurant so that I can have some sort of social interaction or element, even if it's just interacting with a server.
Hannes Lowette: Right.
Michelle Frost: However, that is a field that has been identified as one that could possibly be replaced.
Hannes Lowette: Right?
Michelle Frost: I think that is really going to come down to the company, for example at a restaurant level, and how they want their image to be. Do you want to come in and interact with some sort of automated system? Probably not. Now, other jobs? You mentioned earlier deciding on distribution: what do we send here, what do we send there? We have models for that too now. Or the transportation of goods.
From developers, we get asked this a lot: is AI going to take my job? Is AI going to code for me? You mentioned it already. The abstraction of business into code is not a simple feat. You can give any of the models as much context as you like, but you still need human guidance. I do think there's going to be a really interesting shift in how junior devs learn to code. And there might be more of a responsibility on more senior members to say, okay, these are the things that I learned that maybe your AI model won't teach you.
Hannes Lowette: That will definitely be accelerated. I would be lying if I said I didn't learn a lot of C# features from using ReSharper for a long time, because that has basically already been giving very low-level code suggestions: hey, you might write these couple of statements in a different way, or whatever. Now it's more reasoning about a bigger scope. We see AI reasoning about how classes are structured and so on. We'll see more of that, and that's definitely going to help junior devs. But I also don't see AI replacing the junior devs on my team. It might get them up to speed faster, it might help them be productive faster, it might add to the team. It's not replacing people soon, at least.
Michelle Frost: I still think that there's the human element, right? We call it artificial intelligence, yet there's still so much about human intelligence that we can't define. So how could we make a one-to-one replacement even if we tried? It's going to fail. Even if we tossed everything onto some Jira board with an AI chatbot integration and said, go build this feature, the value that developers bring in uncovering pieces of the business logic we hadn't thought of, or thinking about longer-term strategy, how does this look five years from now, how do I hand it off to another developer? You're not going to get that same thing.
Hannes Lowette: No, well, at least it's not there yet.
Recommended talk: Beyond the Code: Deploying Empathy • Michele Hansen & Hannes Lowette • GOTO 2022
New Developments and Market Disruption
Hannes Lowette: Maybe let's talk about one of the other things I mentioned at the beginning of the interview. We have seen some announcements for very significant AI investments that are going to happen soon. What effect are those going to have?
Michelle Frost: This is a fun week to answer that question. As we were talking about earlier, DeepSeek is completely disrupting everything. For anyone who might not have this information, and truthfully this is unfolding today as we speak: DeepSeek comes from a Chinese company, and they released a paper, I think just last week, about 11 pages, basically about how they applied reinforcement learning on top of a large language model. They were able to do some compression on the models to make them a little bit cheaper and faster, and there are claims that their benchmarks are coming out higher. I think today it's one of the top apps on the App Store, and US tech markets are crashing.
So in a week from now, we'll have more information for you. But I think it also creates an interesting moment for us, because this is going to keep happening. We're going to have these little breakthrough bursts, where something we thought we needed a ton of money for suddenly doesn't need it. Look at the 500 billion that has been committed to Stargate.
Hannes Lowette: That's a big investment.
Michelle Frost: It is. Maybe right now they're scratching their heads saying, do we really need this?
Hannes Lowette: Or: are we even going to come up with that amount of cash? Because these are big players, but it's still a tremendous amount of money.
Michelle Frost: It's a lot of money. I think what I read on DeepSeek was that their previous model cost about 5.5 million to train. We can guess perhaps a little more for what was actually released. But compare 5.5 million, or let's round up to ten just to be safe, with 500 billion. That's a huge gap.
Hannes Lowette: Yes.
Michelle Frost: It opens the doors to potential innovation outside of the big tech companies. Previously, if you really wanted to make an advanced or foundational model, you needed the big bucks. So if you can now do the same thing or even better with a smaller budget...
Hannes Lowette: Is that a shift that we're going to see? Because these are very big models, they can do a lot of different things. Everybody knows ChatGPT; it's a very generic model that can answer very different questions. Are we going to see more models that are smaller, cheaper to train, more performant when we run them, and just as good at answering the narrower questions being asked?
Michelle Frost: Well, you're starting to touch on the differences between artificial narrow intelligence, which is kind of where we are today: AI that has been trained for a specific task. And we have different ways of benchmarking each of these different tasks. Most of the benchmarks we're using right now to talk about ChatGPT, for example, are natural language processing tasks. Now, AGI is the more generalized version: you'll have one model that can do most of these things pretty well. And beyond that, artificial superintelligence is where we kind of hope we never go.
Hannes Lowette: There's that Skynet level of intelligence, right?
Michelle Frost: Yeah, we won't speculate. But to answer your question, yes. Could we maybe even break down some of those price barriers and say, okay, give me a model that is specifically trained to do this and is even better in this category?
Hannes Lowette: To me as a developer, that's very interesting, because these big models are quite expensive to run. They take a lot of processing power. They're super expensive to train; the average company doesn't have the budget to train them. But when you build them into a piece of software, you're probably only looking at a very narrow subset of questions to answer. Can we make that cheaper? Can we make it more environmentally friendly? I think that's also part of the ethics debate: what kind of models are we using for what? That's still something we definitely need to get a better grasp on, because it's still very young.
Michelle Frost: Then I think that will be where that public trust conversation comes into play too. Especially as we're still reeling from the wildfires in LA, that I think last I looked at, it was over 50,000 acres that were burned. And for context, the size of Manhattan is, I think, about 22,000 acres. So more than double the size of Manhattan.
I have seen quite a few articles that try to draw a very straight line between growing AI usage and climate impact. They're very much adjacent; some of those lines have been woven a little tighter than they should be. But it is something we have to start asking ourselves as consumers: do I understand the environmental impact, my environmental footprint, just from interacting with a model like ChatGPT?
I'll be curious to see, maybe in a year from now or hopefully sooner, whether there's enough public interest in that conversation to be able to see what my usage is. And then it would fall a little bit on consumers to become really good at prompt engineering, right? Because if one person can get the same answer in one or two prompts, and another person who hasn't been trained or doesn't quite understand how to ask takes ten questions, and you keep doing that, you're going to have a different energy consumption between the two. So both as consumers trying to do better, and as companies surfacing that information, that's something we can all do together.
Hannes Lowette: At least making people aware of what the impact is of their behavior. Like if you want to just find something on the internet, use Google and not use the AI model to do that.
Michelle Frost: Unless it's already baked in.
Concerns About Biases and Future Outlook
Hannes Lowette: So those are the investments happening right now, and we're going to see some very interesting times there. From a human perspective, the thing that scares me, especially when we see something like a gender war starting in the US, which is definitely happening: if we deregulate these models, we're going to see some models that are heavily biased. I don't want to be the doomsday thinker here, but I see a lot of risk there.
Michelle Frost: I often get cast as the doomsday person coming to talk about ethics, a dark cloud looming. I really do like to clarify that I consider myself more of a pragmatic optimist. I wouldn't be in this field if I did not believe in the potential of AI to make lives better.
Hannes Lowette: Right.
Michelle Frost: But we're not going to get there simply because we wished for it, or hoped we were going to reach that green pasture. It's going to be through intentional effort. We have to be aware of the dark side. We have to be aware of the faults and how we get around them.
For the gender war that's unfolding in the States, that's going to be another type of bias that we see. And we already have so many different models across so many industries, even more classic machine learning models, that are driven very heavily by imbalanced data. In the field of fairness, I will bring this up: there's an old data set called the Adult data set. You would never use this data set for anything other than the fact that it has many different protected categories. It has gender, and I need to call out that it only lists male and female. I believe it was assembled sometime in the 90s from census data in the States.
It does try to represent other countries, but if you look at the spread, it is highly US-dominated, with a few other countries. There are great imbalances between races, between education levels, between different types of occupation, whether you're working in the public sector, the private sector, or government. All these different aspects create quite an imbalanced spread. And if you take this data set, and again, you would never do anything real with it other than to prove the point that this is what happens when you have biased and imbalanced data, the target feature is predicting whether an individual makes a high or low income, based on a specific threshold that was set.
It doesn't take much to say, yes, this is biased. For example, across the only two genders available in the data set, the false negative rate for females predicted to make over the income threshold is way off the charts compared to their male counterparts. But that still doesn't tell us the full picture. We're going to start getting data sets that don't tell us the full picture. They tell us the filtered picture that maybe new regulation is imposing, or that someone collected without really asking: did I represent this truthfully, or did I represent it how I interpret it?
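To make the kind of disparity Michelle describes concrete, here is a minimal sketch (in Python, with pandas and scikit-learn) of measuring per-group selection rates and error rates on the Adult data. It assumes the UCI Adult data has been downloaded locally as adult.csv with its standard column names; that file name and the throwaway logistic regression are illustrative assumptions, not something from the interview.

# Minimal sketch: per-group selection rate, false positive rate and false
# negative rate on the UCI Adult data (assumed saved locally as adult.csv).
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

df = pd.read_csv("adult.csv")
y = (df["income"].str.strip() == ">50K").astype(int)   # 1 = high income
X = pd.get_dummies(df.drop(columns=["income"]), drop_first=True)

X_tr, X_te, y_tr, y_te, sex_tr, sex_te = train_test_split(
    X, y, df["sex"], test_size=0.3, random_state=0, stratify=y)

pred = LogisticRegression(max_iter=1000).fit(X_tr, y_tr).predict(X_te)

for g in sex_te.unique():
    in_group = (sex_te == g).values
    positives = in_group & (y_te == 1).values   # truly high earners in this group
    negatives = in_group & (y_te == 0).values   # truly low earners in this group
    print(g,
          "selection rate:", round(pred[in_group].mean(), 3),   # P(predicted high | group)
          "FPR:", round(pred[negatives].mean(), 3),             # wrongly flagged as high income
          "FNR:", round((pred[positives] == 0).mean(), 3))      # high earners the model misses

Equal selection rates alone would not settle the question Michelle raises later: it is the gap in false negative (and false positive) rates between groups that shows who the model systematically misses.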
Hannes Lowette: Right. As you mentioned, the nature of the source data is going to determine the biases of the models we train on it. But the source data often comes from human systems that were biased to begin with. There have been models trained on prisoners' faces to detect whether somebody would become a delinquent. There was very heavy racial bias in that one, but that may have been caused by the fact that police tend to arrest more Black people.
And that awareness is something I feel there isn't enough push for at the moment as we advance AI. Are there any ways, when we are training models or developing AI applications, that we can either measure bias or at least have a set of practices we can apply to make sure we're balancing for the right thing?
Michelle Frost: Yes. And when you brought up the source data, that brings up another one of the conflicting values we talked about earlier: transparency versus, perhaps, a company's trade secrets. They want to protect their IP, and maybe part of that is the data. Or maybe we're in a situation, in healthcare for instance, where it would be a breach of our users' trust to put that data into the public. And that makes it really hard.
So a few different angles. One is actually in procurement. So say you're a healthcare organization or hospital and you're looking to procure some piece of technology, you would want reports on the ethics. You would want a transparency report. You would want to understand what type of data was collected, what has been done in terms of bias measurement, mitigation and management.
I think what we're going to start seeing is just a different life cycle for AI products. So to answer your question: yes, we can measure bias. To date, I want to say there are something like 26 or 27 different mathematical measurements of bias.
Hannes Lowette: Okay.
Michelle Frost: That's fun. So try handing that off to a team that maybe hasn't been formally trained in fairness and say, hey, go measure the bias in the system. Then they say, okay, how do I measure the bias in the system? Oh crap, what metric do I use? And that becomes a training piece. From a corporate standpoint, if we are now self-governing and self-regulating, you need both top-down encouragement and bottom-up incentive. You can't have just one or the other and be successful.
There are different ways to measure fairness: individual fairness, group fairness. You can seek to measure predictive parity, maybe without ground truth. If two individuals with different attributes of a protected category apply for a loan, for example, do they have an equal chance of getting a positive outcome, assuming everything else about them is the same? The only thing that differs is the protected attribute. Do they have equivalence? That's one measurement. But it doesn't tell us the full picture, because it only tells us the predictive parity of the model. What we really want to see is a balancing of the ratios of false positives and false negatives across different groups.
And in terms of mitigation techniques, there are several. I personally feel that we haven't really solved the problem. I will add to that: there's no free lunch. There's no one rule, no one model, that will solve every problem. In some cases there are pre-processing methods; much of this is design. Have I compiled the data set correctly? Have I split it appropriately?
There are in-processing methods that seek to make the model itself aware of potential bias and, almost adversarially, correct that bias during training. And there are post-processing attempts on the outer side, with a company saying, okay, we're aware that this happened, so how can we adjust the outcomes for this group?
I think, yes, we need to continue those things, but the continuous management of the product's life cycle is also going to be very important. How do we create checks and balances in our process to say: have we measured this? What have we done to mitigate it? What sort of report can we generate for our stakeholders that says, yes, we've done this, and these are the things we need to keep doing?
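As one concrete illustration of the pre-processing family Michelle mentions, here is a minimal sketch of sample reweighing (in the spirit of Kamiran and Calders), where each combination of protected group and label gets a weight so that, statistically, the training data looks independent of the protected attribute. The column names and the toy data frame are assumptions made purely for illustration.

import pandas as pd

def reweighing_weights(df, group_col, label_col):
    # Weight w(g, y) = P(g) * P(y) / P(g, y): combinations that are
    # under-represented relative to independence get weights above 1,
    # over-represented ones get weights below 1.
    p_group = df[group_col].value_counts(normalize=True)
    p_label = df[label_col].value_counts(normalize=True)
    p_joint = df.groupby([group_col, label_col]).size() / len(df)
    return df.apply(
        lambda r: p_group[r[group_col]] * p_label[r[label_col]]
                  / p_joint[(r[group_col], r[label_col])],
        axis=1)

# Toy usage; most scikit-learn classifiers accept these weights via
# the sample_weight argument of fit().
toy = pd.DataFrame({
    "sex":         ["F", "F", "F", "M", "M", "M", "M", "M"],
    "high_income": [0,   0,   1,   1,   1,   0,   1,   0],
})
toy["weight"] = reweighing_weights(toy, "sex", "high_income")
print(toy)

This only rebalances the training signal; it does not, on its own, answer the measurement and reporting questions Michelle describes, which is why she pairs mitigation with life-cycle management.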
Hannes Lowette: Because basically what's ideally needed, and it's way more complex than a food label, is something like a food label for an AI model: when you're buying this tool from us, the models inside it meet these criteria, they have these attributes, and this is what you can expect from them. As a consumer, that's what I would want.
I mean, it's far too complex to have something that simple applied to it. But that brings us maybe to the question: if I as a company buy a tool that employs AI, I use that tool, one of my customers uses the tool that I build on top of it, and something goes wrong somewhere, say a life is lost or somebody gets injured, it's not an easy thing to reason about who's responsible. Because ultimately that decision might have been made way down the line by a tool that was purchased and then built upon. Accountability is definitely something we need to worry about, isn't it?
Michelle Frost: You asked the key question: who is responsible? And right now we're a little bit in the Wild West. There's a myriad of different lawsuits to choose from on this topic. There was a teenager who, a few months ago, committed suicide after a conversation with a chatbot. I believe the mother has filed a lawsuit, and I'm not certain where it has landed or if it's still in progress.
But who is responsible? I think this particular instance was Character AI. Should they be responsible? Or, if we're talking about minors, should the parents be involved in how a child is using technology? Is the parent responsible? You can go even further: what happens postmortem to your data that AI has been trained on? Who owns that?
Hannes Lowette: Right. Is there any framework that is being used in the industry, although even maybe the courts haven't picked up on it?
Michelle Frost: I think there are quite a few individual attempts, because of the lack of overall governing bodies. I will always point back to NIST: they have produced some really wonderful frameworks across quite a few different areas, including bias. Microsoft has also released its responsible AI standards. Quite a few of the larger companies have been doing that. IEEE also has its own ethical AI considerations.
So you can kind of look right now at corporate and academic, and there's work being done in that space, but we have a collapsed bridge that we have to build up.
Hannes Lowette: All right. But at least it is something that is being worked on individually.
Michelle Frost: It has been. We'll see what the changes in the last couple of weeks bring us. I would be lying if I said I was not skeptical and worried. Yet there are still so many people that have been working in this space that are dedicated, that care about it, that are passionate. And I think that it might just fall upon individual shoulders a little bit more. But also, there are so many wonderful people in this world that do care about what happens to humanity and how we're using our technologies. And I think it's just going to get more tense. But also, the world has been very tense these past few years, and that does impact AI.
Hannes Lowette: So a message of hope, basically to conclude, like humans are going to do the right thing, hopefully?
Michelle Frost: I hope so. Here's the thing. If you look back at the last few thousand years of humanity, you can follow threads across the entire human history of, hey, this thing was new. This is what it changed. This is what we got out of it. It's usually a good thing. There's always conflict there. But something good came out of it, right? And then we go on to that next iteration.
We're in another type of revolution. This one is going to be so interesting because it's impacted by so many things other than just itself. We have this geopolitical climate that's happening. You have this AI arms race between different countries, between different companies. You have public divisiveness and political landscapes, you have conflicting ideologies. And all of those things are impacting AI. There's an individual responsibility. And I think sticking your head in the sand and saying AI is going to go away, it doesn't apply to me, is just not going to work this time.
I think my challenge would be to find a use case that does excite you. Think about something that could make lives better. I would love to see some sort of medical device that would read brainwaves, that would tell people if they're going to have some sort of medical emergency.
Hannes Lowette: What the audience doesn't know is that down here is Super, your service dog, right? So you have a stake in this.
Michelle Frost: I do, I do. And it's really frustrating. As a consumer of medical care and a patient, and also as someone who works in AI, I've been super frustrated looking for epilepsy detection devices, which would be incredibly beneficial for me. However, I also want to see that there has been some sort of regulation and procurement process. I would want to see what kind of data the medical device was trained on and what's happening with my individual brain data; it doesn't get more personal than that. What could come out of it long term if I trusted a company with this data, but also what the benefits would be.
There was also a study last year that I still think is maybe one of my favorite things to come out of AI in 2024. It was a neurological study in which they played Pink Floyd for a number of different people while their brain activity was being read, and at the end of it, the model could predict what songs they were listening to.
Hannes Lowette: Right?
Michelle Frost: The implications of that for people who have locked-in syndrome, or who maybe aren't able to communicate effectively, that's brilliant. That would be wonderful. So we should be focusing on those things and maybe not on some of the lower-level tasks. Another thing I could mention: OpenAI has partnered with different companies to help people who are visually impaired, so that they know if their milk carton is expired or how to sort their laundry. Those are wonderful ways in which we can make people's lives easier. So for me, as technologists, we should be focusing on those use cases and making a choice to work on those kinds of things.
Hannes Lowette: But that basically holds true for any software system: work on systems that do something responsible.
Michelle Frost: Don't be evil.
Hannes Lowette: Don't be evil. I think that's a good statement to end this interview on. Like don't be evil and use AI responsibly.
Michelle Frost: That's Google's slogan, right? I'll back that. Don't be evil.
Hannes Lowette: Don't be evil. Thank you so much for joining me today. And thanks so much for talking about ethics.
Michelle Frost: Thank you.