Authors: Bryan Tan, Raju Chellam
Singapore is developing ethics and governance guidelines to shape the development and use of responsible AI, and the island nation’s approach could become a blueprint for other countries. Reed Smith partner Bryan Tan and Raju Chellam, editor-in-chief of the AI Ethics & Governance Body of Knowledge, examine concerns and costs of AI, including impacts on owners of intellectual property and on workers who face job displacement. Time will tell whether this ASEAN nation will strike an adequate balance in regulating each emerging issue.
Transcript:
Intro: Hello and welcome to Tech Law Talks, a podcast brought to you by Reed Smith's Emerging Technologies Group. In each episode of this podcast, we will discuss cutting-edge issues on technology, data, and the law. We will provide practical observations on a wide variety of technology and data topics to give you quick and actionable tips to address the issues you are dealing with every day.
Bryan: Welcome to Tech Law Talks and our new series on artificial intelligence. Over the coming months, we'll explore all the key challenges and opportunities within the rapidly evolving AI landscape. Today, we'll focus on AI and building the ecosystem here in Singapore. My name is Bryan Tan, and I'm a data and emerging technology partner at Reed Smith Singapore. With us today, we have Mr. Raju Chellam, the editor-in-chief of the AI E&G BOK. That stands for the AI Ethics and Governance Body of Knowledge, an initiative by the SCS, the Singapore Computer Society, and IMDA, the Infocomm Media Development Authority of Singapore. Hi, Raju. Today, we are here to talk about the AI ecosystem in Singapore, of which you've been a big part. But before we start, I wanted to talk a little bit about you. Can you share what you were doing before artificial intelligence appeared on the scene, and how that has changed now that artificial intelligence is talked about so frequently?
Raju: Thanks, Bryan. It's a pleasure and an honor to be on your podcast. Before AI, I was at Dell, where I was head of cloud and big data solutions for Southeast Asia and South Asia. I was also chairman of what we then called COIR, the Cloud Outage Incident Response, a standards working group under IMDA, and I was vice president of the cloud chapter at SCS. In 2018, the Straits Times Press published my book, Organ Gold, on the illegal sale of human organs on the dark web. I was then researching the sale of contraband on the dark web. So all of that came together and helped me when I took on this role in the new era of AI.
Bryan: So all of that comes from a dark place, and that has led you to discover the prevalence of AI and then to this body of knowledge. So the question here is: tell us a little bit about this body of knowledge that you've been working on. Why does it matter? Is it a game changer?
Raju: Let me give you some background. The Ethics & Governance Body of Knowledge is a joint effort by the Singapore Computer Society and IMDA, the first of its kind in the Asia-Pacific, if not the world, to pull together a comprehensive collection of material on developing and deploying AI ethically. It is anchored on the Model AI Governance Framework, second edition, which IMDA launched in 2020. The first edition of the BOK was launched in October 2020, before GenAI emerged on the scene. The second edition, focused on GenAI, was launched by Minister Josephine Teo in September 2023. And the third edition, the most comprehensive, will be launched on August 22, which is next month. The most crucial thing about this is that it's a compendium of all the use cases, regulations, guidelines, and frameworks related to the responsible use of AI, from a development perspective as well as a deployment perspective. So it's something that all Singaporeans, if not people outside Singapore, would find great value in accessing.
Bryan: Okay. And so I see how that kind of relates to your point about the dark web, because it is really about a technology that's there that can be used for a great many things. But without the ethics and the governance on top of that, you run into the very same kind of use case or problem that you were researching previously. So as you go around and speak with a lot of people about artificial intelligence, what do you really think is the missing piece or the missing pieces in AI? What are we not doing today?
Raju: In my view, there are two missing pieces in AI, especially generative AI. One is the need for strong ethics and governance guidelines and guardrails to monitor, if not regulate, the development and deployment of AI to ensure it is fair, transparent, accountable, and auditable. Two is the awareness that AI, especially GenAI, can be used just as effectively by bad actors to do harm, to commit crimes, to spread fake news, and even to cause major social unrest. These two missing pieces, which are not mutually exclusive, reflect the fact that the same technology can be used for good as well as bad. It's the same as the early days of the airplane, for instance. Airplanes can be used to ferry people and cargo around the world. They can also be used to drop bombs. So we need strong guardrails in place. And the EU AI Act is just a starting point that has shown the world that AI, especially GenAI, needs to be regulated so that companies don't misuse the information that customers and businesses entrust to them.
Bryan: Okay. Let's move on a little bit and talk about cybersecurity. Part of your background is also in cybersecurity, advising and consulting on it. In terms of generative AI, do you see any negative impact, any kind of pitfalls that we should be looking out for from a cybersecurity point of view?
Raju: That's a very pertinent question, given that the Cyber Security Agency of Singapore has just released data estimating that 13% of phishing scams might be AI-generated. There are also two darker versions of ChatGPT, for example. One is called FraudGPT, F-R-A-U-D, and the other is called WormGPT, W-O-R-M. Both are available on the dark web. They can also be used for RaaS, which is ransomware as a service, which bad actors can hire to carry out specific attacks. Being aware of the negative possibilities of GenAI is the first step for companies and individuals to be on guard and keep their PII, or personally identifiable information, safe. So, as a person involved in cybersecurity, I think that when bad actors have access to a tool that's so powerful, so all-consuming, so prevalent, it can be a weapon.
Bryan: And so it's an area that we all need to watch out for. Alongside the tremendous power that comes with the use of GenAI, the cybersecurity aspects simply cannot be ignored, and that's something we should pay attention to. But moving away from cybersecurity, are there any other issues in AI that also worry you?
Raju: The two key concerns about AI, in my view, other than cybersecurity, are, number one, the potential of AI to lead to a loss of jobs for humans, and number two, its impact on the environment. So let me delve a little deeper. The World Economic Forum has estimated that AI adoption could impact 85 million jobs by 2030. Goldman Sachs has said in a report that AI could replace about 300 million full-time jobs. McKinsey reports that 14% of employees might need to change their careers due to AI by 2030. This could cause massive unrest in countries with large populations like India, China, Indonesia, Pakistan, Brazil, even the US. The second concern is sustainability. According to a University of Massachusetts Amherst study, the training process for a single AI model can emit 284 tons of carbon dioxide. That's equal to the greenhouse gas emissions of roughly 62.6 petrol-powered vehicles being driven for a year in the US. These are two great impacts that people, governments, companies, and regulators have yet to grapple with, because they could become major issues by the time we turn the decade.
Bryan: So certainly some challenges coming up. I remember that for many years you were also an editor with the Business Times here in Singapore. And so this question is about media and media content, specifically, I think, digital media content. With that background in mind, now looking closely at generative AI, do you see generative AI affecting the area of digital media and content generation? Do you see any interesting use cases in which GenAI has been applied here?
Raju: Yes, I think digital media and content, including the entire field of advertising, public relations, and marketing, will be, or is currently being, impacted to a large extent by GenAI, both in its use as well as in its potential, to the extent that many digital media content companies are actively looking at GenAI as a possible route to replace human labor. In fact, look at the Hollywood actors' union: they went on strike because producers were turning to GenAI to even come up with movie scripts. So it is a major concern, because unlike previous technologies, which impacted the lowest ranks of the value chain, such as secretarial jobs, GenAI has the potential to impact the higher or highest ranks of the value chain, for instance, knowledge workers. They could be threatened because all of their accumulated knowledge can be used by GenAI to churn out material as good as, if not better than, what humans could do in certain circumstances. Not in all circumstances, but with digital media content, most of the time, the GenAI model is not just augmenting human potential; it's also churning out material that can be used without human oversight.
Bryan: So certainly a challenging and interesting use case in the field of digital media content. Last question, and again, back to the body of knowledge: let's talk a little bit about the Singapore government's involvement in this area. In Singapore, we do have a tendency for a lot of things to be government-led. In this particular area, where we are really talking about frontier technology like artificial intelligence, do you think this is the right way to go about it, to let the government take the lead? And if so, what more can be done or should be done?
Raju: That's a good question. The good part is that Singapore is probably one of the very few countries, if not the only one, where the government tries to be ahead of the curve in tech adoption and in investing in cutting-edge technologies such as AI, quantum computing, biotech, etc. While this is generally good, in the sense that a clear direction is set for industry to focus on, is there a risk that companies may focus too narrowly on what the government wants instead of what the market wants? I don't know. More research needs to be done in this area. But look at the numbers. Spending on AI-centric systems is set to surpass 300 billion US dollars worldwide by 2026, as per IDC estimates, up from about $154 billion in 2023. So Singapore's focus on AI and GenAI was the right horse to bet on. And it's clear that AI is not a fad, not hype, not an evolution, but a revolution in tech. So at least we got that part right here. Whether we will get the other parts or components right, I think only time will tell.
Bryan: Okay, and the final question, looking at it from an ecosystem point of view, with various moving parts working together: if you personally had a crystal ball and a wishing wand, and you could wish for anything in the future that you think would aid this ecosystem, what would that be?
Raju: I think there is a need for stronger guardrails and some kind of regulation to ensure that people's privacy is protected. The reason is that GenAI can infringe upon the copyrights and IP rights of other companies and individuals. This can lead to legal, reputational, and/or financial risks for the companies using pre-trained models. GenAI models can also perpetuate or even amplify biases learned from the training data, resulting in biased, explicit, unfair, or discriminatory outcomes, which could cause social unrest if not monitored, audited, or accounted for accurately. And the only authority or authorities that can do this are government regulators. So I think government has to take a more proactive role in ensuring that basic human rights and basic human data are protected at all times.
Bryan: With this, I thank you. Certainly a lot more remains to be done in building up the ecosystem to encourage and evolve the role of AI in today's world. But I want to thank you, Raju Chellam, for joining us. And I want to invite those of you listening to continue following our Tech Law Talks series, especially this one on artificial intelligence. Thank you for listening.
Raju: Thank you, Bryan. It's been a pleasure.
Bryan: Likewise. Thanks so much, Raju. I really enjoyed doing this.
Outro: Tech Law Talks is a Reed Smith production. Our producers are Ali McCardell and Shannon Ryan. For more information about Reed Smith's emerging technologies practice, please email techlawtalks@reedsmith.com. You can find our podcasts on Spotify, Apple Podcasts, Google Podcasts, reedsmith.com, and our social media accounts.
Disclaimer: This podcast is provided for educational purposes. It does not constitute legal advice and is not intended to establish an attorney-client relationship, nor is it intended to suggest or establish standards of care applicable to particular lawyers in any given situation. Prior results do not guarantee a similar outcome. Any views, opinions, or comments made by any external guest speaker are not to be attributed to Reed Smith LLP or its individual lawyers.
All rights reserved.
Transcript is auto-generated.