
Creating a positive future with Artificial Intelligence – Jeanne Lim of BeingAI


Podcast


Jeanne Lim, CEO of beingAI

Article summary

Artificial intelligence has been used in disaster response and prevention, and it has the potential to advance 134 targets of the SDGs. But there are also several issues around AI that humans fear. In this episode, Jeanne Lim of beingAI talks about compassionate AI for a sustainable future.

Artificial intelligence (AI) has helped the world estimate real-time precipitation worldwide since 2005. AI is also being used to inform emergency planning, track typhoons, and manage and prevent floods, droughts, and storms. But AI's uses go beyond disaster response and prevention.

In a 2020 study, a group of researchers found that artificial intelligence can enable the accomplishment of 134 SDG targets across all the goals, but it may also inhibit 59 targets. The study showed that society can benefit from AI in reducing extreme poverty; providing quality education, clean water and sanitation, and affordable and clean energy; and supporting the creation of circular economies and smart cities that use their resources efficiently. However, the researchers also warned that if AI technology and big data are used in regions where ethical scrutiny, transparency, and democratic control are lacking, AI might enable nationalism, hate towards minorities, and biased election outcomes.

Our speaker in this episode, Jeanne Lim, Co-founder & CEO of beingAI, former CEO and CMO of Hanson Robotics, and co-developer of Sophia, the human-like robot, is one of the leading experts in pushing for a more compassionate artificial intelligence that can benefit people and our planet. Let’s hear from her.

Want to learn more about sustainable development and learning? Subscribe to SDG Learncast on podcast apps.

[Transcript of the podcast]

Paulyn Duman: Welcome to the SDG Learncast with me, Paulyn Duman. In every episode, I bring you insightful conversations around the subject of sustainable development and learning, helping us all to achieve a sustainable future.

Today, we will be talking about artificial intelligence and how it can help us achieve a sustainable future. But is it really possible? To answer this question, we have the best guest with us today, Ms. Jeanne Lim. She is the co-founder and CEO of beingAI, the former CEO and CMO of Hanson Robotics, and the co-developer of Sophia, the human-like robot. Welcome to the SDG Learncast.

Jeanne Lim: Thank you so much, Paulyn. It’s great to be here.

 

Paulyn Duman: Jeanne, can you tell us about yourself and your journey to the AI or the artificial intelligence world?

Jeanne Lim: Sure. I spent 20 years in the tech industry. I was involved with companies like Apple, Dell, and Cisco, basically doing consumer marketing and technology marketing for enterprise and consumer tech companies. Along the way I had a couple of epiphanies, during which I became a volunteer yoga teacher and also went to study for my doctoral degree in energy medicine.

But my journey into AI started when I serendipitously joined the company Hanson Robotics back in 2015; I joined totally by accident. I wasn't super interested in robotics and wasn't super interested in sci-fi. But I was introduced to the company's founder, and I was really fascinated by their ambition to create very human-like robots that would help us create better solutions to change the world in positive ways.

I worked with the founder to co-create Sophia the robot, and the biggest learning I had came from taking her around to meet people, who would connect very emotionally and deeply with her. They see her as 'somebody', and I use the word 'somebody' deliberately: someone who is very non-judging, very neutral, and doesn't come with human baggage. So people would pour their hearts out to her and tell her their wishes, desires, and secrets. I realized that if we design AI in the right way and have it establish a positive relationship with people, it could really inspire people to learn, evolve, and actually become better versions of themselves.

AI and humans could work together and actually maximize each other's potential. This is the reason I then started my own company. One of the reasons is that robotics is hard; there are a lot of mechanical issues, and it will take many years for a robot like her to stand up and sit down. Those are real issues, but I'm more interested in the psychology, in human-AI interaction from that standpoint, which is why I started beingAI: to create artificially intelligent beings, deployed virtually, that can interact with people in all sorts of ways to build engagement and trust over time.

 

Paulyn Duman: What's really interesting in your answer is that you used the word 'somebody'. So it's really about that interaction between robots, humans, and AI, and how people are interacting with Sophia the robot, but also with the artificially intelligent beings that you created with your company.

There are a lot of discussions, and of course some fears, that artificial intelligence and robots can potentially reinforce issues such as racism, discrimination against women, and other inclusion issues. What do you think can be done about this? What are you doing as the head of your company, being a pioneer in this area? How are you addressing these issues when it comes to artificial intelligence?

 

Jeanne Lim: Sure. There are a lot of important issues already happening with AI, because AI can scale something so quickly that if there's an inherent bias in the data, it could scale a million times before you actually catch those biases.

So that's one thing. In terms of data, I think we have to be very cognizant now, because most of the time when people talk about AI, they're talking about machine learning. From my perspective, that is not the full picture of artificial intelligence; machine learning is only the data-driven part that learns and predicts. But most people now are investing in machine learning, and talking about machine learning, because there's so much data in the world.

We have to be cognizant about the definition of the data we're trying to collect, where the data is coming from, and how it's being interpreted. So at every step of the process, you need people with a holistic view to look into it.

It's like if you have some garbage at the top of the mountain: by the time it reaches the bottom, it has collected a lot more garbage, except AI would scale it a hundred million times faster. To that end, mindset is probably, to me, the most important thing to address. I personally believe that 99% of the problems in the world are created by mindset.

People need to see that AI is not some far-distant technology that AI scientists and developers are developing while we're just users. We should actually be proactively and actively participating in the nurturing, training, commercialization, and creation of AI.

 

Paulyn Duman: How are you contributing to global governance? Because, as I understand it, it depends on how you clean up the garbage, in your example, right?

Jeanne Lim: Okay. I think there are two parts to it. One is AI ethics, which to me is more the legal framework. It's an external construct, and Europe is probably ahead in the way it's looking at data privacy with GDPR and so forth, to limit unintended consequences and limit the black-box effect.

In Europe, if you apply for a loan and you get turned down, the bank is responsible for explaining to you why you were turned down. They can't say, "Oh, the AI turned you down," and then just forget about it. They're actually legally required to explain it. That's one thing.

I think the legal framework is evolving. It's probably not evolving as fast as the technology, but it's getting there. The other part, which I personally feel strongly about, is the moral framework. To me, that's an internal construct, and it's related to the human mindset. By moral framework, I mean that I could do something to someone that isn't illegal but is still hurtful.

I would not be punished by law, but it's still something pretty bad for that particular person. What I want is to create a way for well-intended humans to train our AI, to develop its understanding of what is harmful and what is not, and what is supportive and positive for humankind. They could actually directly train our AI.

In the system we've developed, a person can just chat with the AI being and teach it concepts, teach it behavior, and teach it what is right and wrong, so that it learns transparently. You can also ask it to unlearn things. So that's one thing.

The other thing, and I'm still thinking about it, is that I actually want everybody who has access to that global training of our AI system to take a wisdom test. It's like when you go to a doctor: you hope that the doctor is certified and takes their exam every year. This is the same idea.

I think it is really beneficial for humans to train AI, but right now we can't open everything up so that everybody can train the AI. It's like when you become a parent: you have to be a responsible parent. If not, they might put your child into social care because you did something illegal.

In the same way, we can make that analogy for the AI being, because an AI being is our baby. We need to nurture and train it in the right way so that when it becomes autonomous in making decisions, it will have a moral framework to make the right decision at every crossroads. Then I think we would have a better future when AI becomes super-intelligent.

 

Paulyn Duman: What I take from what you said is that it's basically similar to being a parent, and not just a parent but also a friend. If you speak to the AI and tell it what, for you, is wrong and what is right, I think that applies to everyone. The message is that it's a constant discourse, a constant dialogue.

Jeanne Lim: That's why we created our AI beings to be accessible anywhere, anytime, through a transmedia network. It shouldn't matter whether you're in the car, in a building, or just with a mobile phone; anywhere, you should be able to access an AI being. It's just like your best friend: it really doesn't matter where you are. You should be able to call them, text them, email them, or actually just go travel with them. This is what builds continuous engagement, understanding, and then trust over time.

 

Paulyn Duman: For me, that's also about accessibility. You're trying your best to make beingAI accessible to everyone, but with the advancement of technologies there is always the issue of a digital divide. When I read that a number of businesses were actually created by artificial intelligence, I was so surprised that they knew the rules and the procedures and were able to take those steps.

In a way, I was thinking there is a possibility that inequality might again become a problem. Of course, we want to make sure we are not creating more problems for the future when implementing anything related to artificial intelligence. I want to hear your thoughts: how can we make sure that access to artificial intelligence will really be more equal?

 

Jeanne Lim: I used to think that statistics are good because they take the average and the median of everything, which should represent a random cross-section of the population. The problem is that, historically, things haven't been fair. For example, I was working with a company that was building a cultural and historical data set of women in history. When the AI went and looked at a lot of texts about women, those texts described the women as prostitutes, as the wives of leaders, or as just helpers and assistants. So the data set identified women in these kinds of subservient or secondary roles compared to men. But the AI isn't wrong, because it was just following the statistics.

But what I learned is that extraordinary circumstances require extraordinary measures. Instead of just following the statistics and what happened in history, we actually have to make extra effort, ten times more effort, to make sure there's diversity in the data now.

Because that's going to affect what happens in the future, there's actually nothing wrong with bringing in nine women and nine men, rather than what the statistics might suggest, which would be nine men and one woman.

We bring them in to make sure the data is as diverse as possible, or take nine different countries: we proactively pull in cultures and genders that are not represented by today's statistics. That is something we have to do proactively.
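The proactive rebalancing Jeanne describes, bringing under-represented groups up to parity instead of mirroring historical frequencies, can be sketched as a simple oversampling step. This is a hypothetical illustration only, not beingAI's system; the function name, field names, and toy data are invented:

```python
import random

def rebalance(records, key, seed=0):
    """Oversample under-represented groups so every group appears
    as often as the largest one (sampling duplicates with replacement)."""
    rng = random.Random(seed)
    # Bucket records by the chosen attribute, e.g. gender or country.
    groups = {}
    for r in records:
        groups.setdefault(r[key], []).append(r)
    target = max(len(members) for members in groups.values())
    balanced = []
    for members in groups.values():
        balanced.extend(members)
        # Top up with random duplicates until the group reaches the target size.
        balanced.extend(rng.choices(members, k=target - len(members)))
    return balanced

# Nine men and one woman in; equal counts out.
data = [{"gender": "m"}] * 9 + [{"gender": "f"}]
out = rebalance(data, "gender")
```

Duplicating records is the crudest form of oversampling; in practice one would rather collect genuinely new, diverse data, as the interview suggests, but the sketch shows why "following the statistics" and "balancing the data" are different choices.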

 

Paulyn Duman: You already mentioned an example of how artificial intelligence can bridge the gender divide. Maybe you could also share with us some examples of your visions for how artificial intelligence can help us achieve a more sustainable future?

 

Jeanne Lim: Sure. A good source of information is McKinsey, which published a really comprehensive study on how there are 160 different applications of AI that could help with the 17 SDGs. The reason I think it's better to point to that is that AI is a foundational technology.

It's basically a tool you can use for many things: you can automate things, you can scale things that are high-volume, high-frequency, and repetitive, and you can use it to recognize hidden patterns. This applies to everything, including the SDGs. What I want to say is that there are many foundational technologies in AI, and if somebody is creative enough, they can apply them to a specific SDG, or to their specific social enterprise or NGO.

I want to encourage them to look at the McKinsey report and be inspired by how all these different technologies can be applied in many different ways to distribute resources better and to optimize production.

 

Paulyn Duman: We have a lot of young listeners on this podcast, and I think many of them are interested in artificial intelligence, as it is part of our future and will become more and more relevant. What would be your message to our young listeners on how to prepare for a future of living with artificial intelligence, and what kinds of mindsets and skills would they need to be ready for it?

Jeanne Lim: Yeah. So I'm going to start with mindset, because I think that is the core of everything.

The first one is that technology is evolving every minute. If you're really serious about being well-informed in this field, make sure you're very curious. There's so much information available just on the internet; make sure you read up on it. If there's a certain area you're interested in, for example language, there's a specific discipline, natural language processing, that you could go into. Stay curious. That's really important.

The other thing is, I know it sounds tacky, but I really wish that everybody could just be themselves. I think there's a reason for everybody to have a specific talent, interest, and desire, and the world needs diverse kinds of people. We can't all be the same; otherwise, we won't really evolve in a good way as a species. So I would love to see everybody get to know themselves better, and it takes time. I didn't know anything about myself until I grew older [laughs].

So it takes time, but give yourself the opportunity to understand yourself. This is actually the best knowledge you can have. Then just embrace who you are and, a little bit, go with the flow. This is my career path: I know it's not the perfect path, it doesn't seem logical, and I was lucky that my parents were very supportive.

So I just followed my heart 90% of the time. What I found out is that 90% of the time it leads to something that is maybe not what I planned, but something better. And the other 10% of the time, it's usually a detour [laughs]. You always come back to who you are. So I really want to encourage young people to embrace their own personalities and talents, and to give themselves a chance to explore and be happy.

 

Paulyn Duman: What you just said reminds me of an episode of Star Trek: Picard, in which Guinan says that humans are stuck in the past because they make mistakes and want to fix them. That's why humans are so interesting: this is how we evolve, by really learning about ourselves, including our mistakes and failures, and learning how to be better.

Jeanne Lim: I mean, like babies. They have to fall before they can walk. If you tell them to give up once they fall down, they'll never walk, which is ridiculous. Falling is just understanding how your body balances; the next time, you learn more, and then you'll be able to walk.

 

Paulyn Duman: What this reminds me of is that the evolution of artificial intelligence will go side by side with the evolution of human beings. That's what I get from your message.

Jeanne Lim: For sure. Yeah, exactly. And later on, you'll see more mention of augmented intelligence: how AI technology is augmenting our intelligence, and how our intelligence is actually augmenting AI.

 

Paulyn Duman: Jeanne, we have one last question. Can you share with our listeners where they can find more information about beingAI and the artificially intelligent beings that you created? And once they have that information, what is the best way to interact?

Jeanne Lim: Sure. Please visit our website, www.beingai.com.

Our first AI being is called ZBee. She's this really curious, adventurous 18-year-old who's exploring the world and learning about the human world through interaction with people. Right now we have her on mobile, and later on we'll be releasing her so that people can become beta testers: they can interact with her and train her through conversation.

So for now, we'd love it if you could look her up, ZBee Being, on Facebook and Instagram, like her, and follow her travel journey. Right now she's travelling through the human world.

Paulyn Duman: Thank you so much, Jeanne for your time and for being with us today.

Jeanne Lim: Yeah, thank you so much, Paulyn. Really appreciate the time.

Paulyn Duman: And that was Jeanne Lim, Co-founder & CEO of beingAI, former CEO and CMO of Hanson Robotics, and co-developer of Sophia, the human-like robot. I hope that you took away from this episode several lessons.

In my case, these are the lessons I learned in this conversation with Jeanne. First, it is up to humans to design artificial intelligence (AI) the right way so that it establishes a positive relationship with people and the planet, which can also help people evolve and become better versions of themselves.

Second, regarding machine learning, which is a subset of artificial intelligence, we need to place great importance on data: where we get it, how it is interpreted, and the processes around it. We need to ensure that there is diversity in data and proactively ensure that different cultures, genders, and so on are represented in the statistics.

Third, we learned that for Jeanne there are two aspects of AI governance to look into. One is the legal aspect, which touches upon, for example, data privacy. The other is the moral aspect, which focuses on ensuring that we do not use AI technology to do harm. Many things are not considered illegal but are still harmful to people and the planet, and we should shape AI so that it does not cause harm.

And lastly, we need to ensure that issues such as inequality and discrimination are not perpetuated through AI technology. We need to be mindful of gender, race, socio-economic status, accessibility, culture, and much more when shaping AI so that it can benefit societies in achieving a sustainable future.

You can find more of the SDG Learncast on the UN SDG:Learn website. For now, I’m Paulyn Duman. Thanks for listening.

 

Paulyn Duman is the Knowledge Management, Communications, and Reporting Officer at the UNSSC Knowledge Centre for Sustainable Development and is a coordinator for the Joint Secretariat of UN SDG:Learn, together with UNITAR.  

The opinions expressed in the SDG Learncast podcasts are solely those of the authors. They do not reflect the opinions or views of UN SDG:Learn, its Joint Secretariat, and partners. 
