India’s approach to regulating AI is good, says Andrew Ng

Andrew Ng, the founding lead of the Google Brain team and former chief scientist at Baidu, juggles multiple roles as a teacher, entrepreneur, and investor. He is currently the founder of DeepLearning.AI, an edtech company; founder and CEO of Landing AI, a software provider for industrial automation and manufacturing; general partner at AI Fund; and chairman and co-founder of Coursera, besides being an adjunct professor in Stanford University’s Computer Science Department.

In an interview, he shares his views on the OpenAI fracas, loss of jobs to generative artificial intelligence (AI), the heated debate around artificial general intelligence (AGI), and global regulation of AI, among other things. Edited excerpts:

Please share some quick thoughts on the developments at OpenAI.

Sam (Altman, CEO of OpenAI) was my student at Stanford. He interned at my lab. I think he’s been a great leader. What happened was pretty tragic, and it could have been avoided (the interview was conducted a day prior to Altman returning as CEO of OpenAI). OpenAI has many valuable assets: reportedly more than $1 billion in annualised revenue, many customers, and a phenomenal product. But its governance structure is now very much discredited. Earlier, there were all sorts of arguments about why a nonprofit structure is preferable, but this incident will make investors shy away from the clever arguments for very innovative governance structures.

OpenAI is the poster boy of Generative AI, but there is also much concern over jobs being lost to Generative AI tools.

For a lot of jobs, Gen AI can augment or automate just a small fraction of the work; let’s say 20% of someone’s job could be automated using GenAI. That makes it beneficial both to businesses and to individuals, but we need to figure out which 20% can be automated, and then use GenAI to get that productivity boost. I’m not minimising the suffering of the much smaller number of people whose jobs will be fully automated. I think we owe it to them (those impacted) to create a safety net. But in the vast majority of cases, AI today is good enough only to automate part of someone’s job. And that often means that people in that job who use AI will replace people who don’t.

AI experts including Yann LeCun, Fei-Fei Li, and yourself disagree with others like Elon Musk, Geoffrey Hinton, and Yoshua Bengio that the world is close to building an AGI machine that can outsmart or overpower humans. Why is there such a heated debate over this topic?

Few Asian countries have been caught up in the AI extinction hype; it’s more of a European thing. The most widely accepted definition of AGI is that AI would do any intellectual task that a human could do. I think we’re decades away from that, maybe 30-50 years. It turns out that there are a number of companies and people who are optimistic about achieving AGI in 3-5 years.

But if you look carefully, many of them have been changing the definition of AGI, and thus are quickly lowering the bar. If we ask whether the machine is sentient, or self-aware, it becomes a philosophical question, and I don’t know the answer to it because it’s not a scientific question. But imagine if we were to set a very low bar, some very simple test to declare machines sentient; it would lead to very sensational news articles saying machines are sentient. So, I’m not sure whether coming up with a very narrow technical definition is a good thing.

What’s your take on the host of global regulations around AI, such as the new US executive order, the G-7 Hiroshima Process, and the UK Summit on AI Safety, to name a few? While we certainly need regulation, will a surfeit of regulations end up stifling innovation if not implemented well?

We need good regulations on AI, and clarity on how we should or should not take AI into areas such as healthcare. The EU’s (European Union) AI Act was thoughtful in some places and flawed in others. It’s a good idea to take a tiered approach to AI risk: using AI for screening people for jobs is high risk, so let’s make sure to mitigate that risk.

Unfortunately, I’m seeing much more bad regulation around the world than good regulation. I think the US White House executive order is a bad idea in terms of starting to put burdensome reporting requirements on people training large models. It will stifle innovation, because only large tech companies will have the capacity to manage compliance. If something like the White House executive order ends up being enforced in other countries too, the winners, arguably, will be a handful of tech companies, while it will become much harder to access open-source technology.

I’m not very familiar with India’s approach to regulation. But my sense is that India is taking a very light touch, and I think India’s approach is good. In fact, most Asian nations have been regulating AI with a much lighter touch, which has been a good move.

But the misuse of deep learning algorithms to create deep fakes and nude images certainly needs regulation.

I think regulating AI applications is a great idea. Deep fakes are problematic, and certainly one of the most disgusting things has been the generation of non-consensual, pornographic images. I’m glad regulators are trying to regulate those horrible applications.

Yet, having more intelligence in the world via human intelligence or artificial intelligence is a good thing. While intelligence can be used for nefarious purposes too, one of the reasons that humanity has advanced over the centuries is because we all collectively got smarter and better educated and have more knowledge. Slowing that down (with regulation) seems like a very foolish thing for governments to do.

What’s the traction you’re seeing in the generative AI course on Coursera?

The Gen AI course is the fastest-growing course of 2023, with about 74,000 enrollments in just the first week. That probably won’t surprise you, since there’s very high interest in learning Gen AI and technical skills. We are seeing a lot of traction on developer-oriented content, as well as from a non-technical audience, because GenAI is so disruptive; it is changing the nature of work for a lot of professions. I hope that ‘Gen AI for Everyone’ and other courses on Coursera can help people use the technology, become developers that build on top of it, and create a layer that is valued by the builders (of the platform).

But AI today is also writing its own code. This has given rise to much confusion over what courses one should take, and what skills one should acquire. How can individuals and companies address this concern?

Fear of job losses is a very emotional subject. I wish AI was even more powerful than it is, but realistically it can automate only a fraction of the tasks done in the economy. There’s still so much that Gen AI cannot do. Some estimates peg that GenAI can automate maybe 15% of the tasks done in the US economy; at the higher end, maybe approaching 50%. Whether 15% or 50%, these are huge numbers as a percentage of the economy. We should embrace it (Gen AI) and figure out the use cases. In terms of how to think about one’s own career, I hope that the ‘Gen AI for Everyone’ course will help with that.

Given the accelerated pace at which Generative AI is moving, should companies be early adopters or wait for these technologies to mature?

Any company that does a lot of knowledge work should embrace it (Generative AI), and do so relatively quickly. Even industries that don’t do knowledge work seem to be becoming more data-oriented. Things like manufacturing and natural resource extraction, which traditionally did not seem knowledge-driven, are becoming more data- and AI-oriented, and it turns out that the cost of experimenting with and developing on Gen AI is lower than it was with earlier AI.

A good recipe for senior executives is to take a look at the jobs being done by people in the company, break the jobs down into tasks, and see which tasks are amenable to automation. And given the low development costs, definitely every large enterprise should look at it (Gen AI). Even medium enterprises may have the resources to develop Gen AI applications, and so do small enterprises.
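
As a rough illustration of that recipe, here is a minimal sketch in Python. The role, task list, hours, and automation scores are all hypothetical, and the 0.5 threshold is arbitrary; in practice the scoring might come from an LLM or a structured review by domain experts rather than hand-assigned numbers.

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    hours_per_week: float
    automation_potential: float  # 0.0 (none) to 1.0 (fully automatable); assumed scores

def automatable_hours(tasks: list[Task], threshold: float = 0.5) -> float:
    """Sum the weekly hours spent on tasks at or above the automation threshold."""
    return sum(t.hours_per_week for t in tasks if t.automation_potential >= threshold)

# Hypothetical task breakdown for a support-analyst role.
role = [
    Task("Draft replies to routine tickets", 12.0, 0.8),
    Task("Summarise escalated cases", 5.0, 0.7),
    Task("Negotiate with vendors", 8.0, 0.2),
    Task("On-site hardware checks", 5.0, 0.0),
]

total = sum(t.hours_per_week for t in role)
candidate = automatable_hours(role)
print(f"{candidate:.0f} of {total:.0f} weekly hours ({candidate / total:.0%}) "
      f"look amenable to Gen AI assistance")
```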

That said, enterprises fear that Generative AI is not safe enough for them, given limitations such as hallucinations (making up facts) and security concerns.

Gen AI is absolutely safe enough for many applications, but not for all applications. Part of the job of not just C-suite executives, but of companies broadly, is to identify those applications and take advantage of Gen AI within them. Would I have Gen AI tell me what pill to take for a specific ailment? Probably not. But Gen AI can be used for a lot of applications, including as a thought partner to help with brainstorming, improving your writing, or helping to summarise or process information. There are a lot of use cases in corporations too where it can boost productivity significantly.

There is another debate around the efficacy of LLMs (large language models) versus SLMs, or small language models. I would love your take on this topic.

Think about how CPUs (central processing units) come in different sizes for different applications. Today, we have very powerful data centre servers and GPUs (graphics processing units), and yet we have a CPU running on my laptop, a less powerful one running on my phone, an even less powerful one running my watch, and an even less powerful one controlling the sunlight in my car.

Likewise, a really advanced model like GPT (generative pre-trained transformer) should be used for some very complex tasks. But if your goal is to summarise conversations in a contact centre, or maybe check the grammar of your writing, then the model does not need to know much about history, philosophy, or astronomy, implying that a smaller model would work just fine.
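
A minimal sketch of that right-sizing idea, using the Hugging Face transformers library: a small distilled summarisation model handles a contact-centre transcript without invoking a large general-purpose LLM. The checkpoint named below is just one example of a small model, not a recommendation, and the transcript is invented.

```python
from transformers import pipeline

# A small distilled summarisation model; an example choice, not a recommendation.
summarizer = pipeline("summarization", model="sshleifer/distilbart-cnn-12-6")

# Invented contact-centre transcript.
transcript = (
    "Agent: Thanks for calling, how can I help? "
    "Customer: My router drops the connection every evening. "
    "Agent: Let's update the firmware and switch to a less crowded channel. "
    "Customer: That fixed it, thank you."
)

summary = summarizer(transcript, max_length=40, min_length=10, do_sample=False)
print(summary[0]["summary_text"])
```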

Looking at the future, there will be more work on edge AI (on devices), where more people will run smaller models that can also protect one’s privacy.

There are some who believe that open source foundation models and open source LLMs will be more transparent than proprietary models, especially when trained by unsupervised algorithms. Would you agree?

There are models where you can probably understand the code better and where, perhaps, the transparency is higher. But even for open source, it’s pretty hard to figure out why a specific algorithm gave a certain answer.
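
To make that concrete: with open weights you can inspect a model directly, for example its next-token probabilities, yet a table of numbers is still a long way from an explanation of why the model preferred one answer. A minimal sketch, using GPT-2 here only as a stand-in for any open model:

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

inputs = tokenizer("The capital of France is", return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, sequence_length, vocab_size)

# Probability distribution over the token that would come next.
probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(probs, k=5)

# Full visibility into the numbers, yet no explanation of why these tokens win.
for p, idx in zip(top.values, top.indices):
    print(f"{tokenizer.decode([idx.item()])!r}: {p.item():.3f}")
```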

You juggle multiple roles, one of them being general partner at AI Fund. It’s said that many so-called AI companies are just building thin wrappers around AI and Generative AI applications, and hence will not survive. What should deep tech entrepreneurs consider when building companies today?

While it is true that there have been some companies that are a thin wrapper on some APIs (application programming interfaces), there are actually a lot of opportunities to build really deep tech companies atop new Gen AI capabilities. Take a different analogy: I don’t think that Uber is a thin wrapper on top of iOS, but you can do a lot of work on top of it. AI Fund focuses on venture-scale businesses, so we tend to go after businesses where there is a significant market need that we can build technology to address. And we are building things that involve deep tech and are not that easy to replicate.

But I would tell an entrepreneur: just go and experiment. And frankly, if you build a thin wrapper that works, great. Use those learnings to make that wrapper deeper, or go do something else that is even harder to build. This is a time of expansion, creativity, and innovation, but innovators must be responsible. There are so many opportunities to build things that were not possible before the new tools were available.

AI is a very transformative technology that benefits every individual and every business. That’s why I was excited to teach ‘GenAI for Everyone’. Because we have to help every individual and every business navigate this, I hope that people will jump in, learn about the technology, and use it to benefit themselves, the communities around them, and the world.
