Who Are GetSmarter and 2U?

Peter Morgan Bio: Peter is Head Tutor for the 2U short course on artificial intelligence offered through the Saïd Business School at Oxford University. He is also CEO of Deep Learning Partnership (www.deeplp.com), an AI consulting company he founded in London ten years ago. Before this he was a Solutions Architect in the Internet industry. He holds an MBA and was studying for a PhD in physics before joining industry.

GetSmarter, a 2U, Inc. brand, is an online learning expert with over 10 years’ experience in developing high-quality online short courses from the world’s leading universities, including Harvard, MIT, and Yale. The courses are designed with professionals in mind and provide a career-focused curriculum that equips working professionals with the expertise required for the workplace, through an immersive online experience.

The brand offers short courses across a range of current topics, with data-driven course selection ensuring that the most relevant courses are chosen for your future skills requirements. As a student you will also gain access to edX’s Career Engagement Network, which offers exclusive resources and tools to support your professional journey and drive your career forward.


With AI in the news at the moment, what will all these advances mean for everyone?

That’s a great question. So basically, how did we get here? We’ve been trying to understand and build AI systems since before the 1900s, but the field of AI really started in 1956, with a conference in New Hampshire and John McCarthy coining the phrase artificial intelligence. Finally, after about 70 years, we’re at the point where AI has human-level capability and beyond in language and creativity. Those are two of the pillars, if you like, of intelligence: creativity and understanding language.

What does it mean for everyone now, the general public? It means they can have a really smart version of Google or other search engines, which presently return links in response to a typed or spoken query. If you type a query into the new AI language models, such as ChatGPT and Google Bard, you will get back not just links, but fully formed answers written in natural language that humans can understand.

So that’s actually a big deal: it changes the paradigm from returning links to returning responses to search queries in the form of a few well-written and, hopefully, accurate paragraphs. I say hopefully because these systems can sometimes return incorrect information, but this is constantly being improved upon. This new way of information retrieval avoids clicking on links, which is a perfectly good, but less efficient, way of finding things out.

Now instead, with these large language models (LLMs), the information we request comes back neatly packaged for us, no additional work required. That’s a big deal for the general public, like next level Google search.
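The shift from links to packaged answers can be sketched in a few lines of code. This is a minimal, illustrative sketch, assuming the `openai` Python package and an `OPENAI_API_KEY` environment variable; the model name, prompt, and helper function are invented for illustration:

```python
# Minimal sketch of querying a chat-style LLM for a direct answer
# instead of a list of links. Assumes the `openai` Python package
# and an OPENAI_API_KEY environment variable; model name is illustrative.
import os

def build_query(question: str) -> list[dict]:
    """Package a user question in the chat-message format these APIs expect."""
    return [
        {"role": "system", "content": "Answer concisely in plain English."},
        {"role": "user", "content": question},
    ]

messages = build_query("What is a large language model?")

# Only make the network call if a key is configured.
if os.environ.get("OPENAI_API_KEY"):
    from openai import OpenAI
    client = OpenAI()
    response = client.chat.completions.create(
        model="gpt-3.5-turbo", messages=messages
    )
    # A fully formed natural-language answer, not a list of links.
    print(response.choices[0].message.content)
```

The point of the sketch is the output shape: the response is a finished paragraph of prose rather than a ranked list of URLs the user has to click through and assemble themselves.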

For businesses it’s huge because content creation is a major part of a lot of what companies do in order to build their products and services. You can think of law, medicine, healthcare, any industry sector. If you think about it, a lot of time spent by white collar employees is on gathering and curating information then creating documents, whether that’s Word, PowerPoint, Excel, etc.

These large language models can do this, and they do it quicker, more cheaply, and often more accurately than humans can. That said, we are not quite at the point where we can trust the responses of these large language models completely.

That’s where we still need a human in the loop; we need to check the responses. The accuracy of these models over time is getting better as these large language models get trained on more and more data with more and more feedback provided to them.

AI is going to impact government, it’s going to impact business, and it’s going to impact scientific discovery. So it really is a huge deal. It can help in scientific discovery; it can read academic papers on a subject and contribute to that field. It can take data and provide summaries on what that data means, organise and extrapolate it and come up with new hypotheses.

Basically, anything a human can do with language and creativity, these large language models can do, because they’ve been trained on all the publicly available data in the world, which is a huge accomplishment in itself.

Training took around two years, roughly $20 million, and a data centre filled with something like 20,000 GPUs. So that’s what the big companies like Microsoft and Google have been working on over the last couple of years, and we’ve only recently seen the fruits of that labor coming to market. ChatGPT was released on November 30th 2022, so only six months ago, and Google Bard not long after that.

What is the most important consideration when it comes to AI?

That really depends on if we’re talking about the public or businesses. And there are many important considerations, I don’t think there’s just one. The first one is safety; can we keep these systems safe? Another is how reliable the information is that these LLMs return.

The third consideration is what’s going to happen in the future as these models get more and more powerful. Geoff Hinton, who has just resigned from Google (although he was 75, so maybe he was going to retire anyway), said that this has happened much, much quicker than he and others working in the field had anticipated.

Hinton is like the godfather of AI. He started out in the field 40 years ago by helping develop the backpropagation algorithm, which is one of the most important machine learning algorithms in deep learning neural networks, which is what large language models are.

In fact, LLMs are just artificial neural networks (ANNs), or deep learning. He was one of the main players and, even though we’ve had some severe AI winters, he kept going, and he ended up at Google. Google bought his company in 2013, when he was an academic researcher at the University of Toronto.
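To give a flavour of what backpropagation actually does: the forward pass computes a prediction, and the backward pass uses the chain rule to push the error gradient back into the weights. A minimal sketch in plain Python for a one-neuron network; the data, target function, and learning rate are invented for illustration:

```python
# Minimal sketch of backpropagation for a one-neuron network y = w*x + b,
# trained by gradient descent to fit y = 2x + 1. Data and learning rate
# are invented for illustration.
data = [(x, 2 * x + 1) for x in range(-5, 6)]  # target function: y = 2x + 1
w, b, lr = 0.0, 0.0, 0.01

for epoch in range(1000):
    for x, y_true in data:
        y_pred = w * x + b          # forward pass
        error = y_pred - y_true     # loss is 0.5 * error**2
        # backward pass: the chain rule gives the gradients
        grad_w = error * x          # d(loss)/dw
        grad_b = error              # d(loss)/db
        w -= lr * grad_w            # gradient descent update
        b -= lr * grad_b

print(round(w, 2), round(b, 2))  # approaches 2.0 and 1.0
```

A large language model applies exactly this loop, scaled up to billions of weights and trillions of training tokens, which is why it needs data centres full of GPUs.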

And he said, you know, I wasn’t expecting this sort of level of performance for another five to ten years and it’s happened. And since the release of ChatGPT and GPT-4, a lot of scientists and industry practitioners are signing open letters and petitions to say perhaps we need to slow the research down until we can put guardrails around it and make it safe. And keep these models away from people who don’t really know what they’re doing because they’re so powerful.

They can be used for misinformation and cybersecurity attacks. There are also military and national security implications, that kind of thing. It’s like having a super smart person in every domain, in one system. These are generally intelligent systems; they’ve been trained on all domains. So safety is an important consideration moving forward, and we’re seeing a lot of movement now in governments around the world, with statements saying that they are going to form departments especially focused on AI safety and regulation.

All of these experts speaking out and signing petitions have spurred governments into action, which is a good thing. It’s wise to put guardrails around these very powerful technologies. Just as you wouldn’t want the average person walking into a nuclear power station or a biolab, so we need to think carefully about regulating these large language models going forward. So that’s an important consideration.


Why are AI courses so important now more than ever?

AI courses are more important now than ever, simply because the models are here now. These AI large language models are here now, they’re very powerful, and they’re very good at what they do. So people naturally want to learn about these new types of intelligence, you could say artificial species, amongst us. It’s a kind of natural curiosity: what is this new kind of intelligence, and what does it mean for me personally at a deep, perhaps existential, level?

And businesses want to know: how does this affect my competitive positioning in the marketplace? How do I adapt these models into my business workflows, for example? So everybody naturally wants to learn about these new technologies, whether for personal or business reasons.

What are the benefits to learning about AI?

The benefits are that the more we know about something, the less dangerous it becomes. It’s fear of the unknown. If something’s a black box, and it seems to be very powerful and intelligent, more intelligent than us, one of our first reactions is fear, and rightly so.

Reducing that uncertainty will reduce our fear levels. Remember, fear is directly proportional to uncertainty. So by learning a little about these large language models, their history and how they work, we can reduce the uncertainty around this technology.

And then we’re in a better position to really start to make decisions, either personal or business decisions around how we are going to react and respond to these AI systems that have so recently been unleashed upon us. And if you’re in business, then obviously you will gain competitive advantage if AI can be integrated into your business workflows and products before, or more efficiently than, your competitors.

If AI can “out-learn” humans, what do we need to be aware of and what are the dangers?

The dangers are, I think, threefold. One is bad actors using AI for bad things like disinformation, maybe influencing elections, or other nefarious purposes, whatever they may be.

The second danger is people who don’t know enough about these models making mistakes with this very powerful technology. It’s like giving a knife to a baby, right? It’s a very dangerous thing to do, even with adult supervision. And the third risk is these AI systems becoming autonomous, developing agency, and deciding they want to take over the world.

Now this is the Terminator scenario, right? But it’s worth thinking about and taking seriously. So those are the three main dangers. Governments are now starting to step up to regulate this new and potentially dangerous technology as quickly as they can. But it will take time to get right, and it’s prudent to acknowledge that no system of regulation is foolproof.

How will AI change the ways in which we learn and work?

Let’s start with education and how AI will change the ways we learn. So now we basically have an intelligent counterpart. We’ve had Google search and other search engines for a while, that return web links. But now we have a new information retrieval system, one that outputs the answers to us directly, so we don’t have to click on links and rearrange the information ourselves. This is all done for us by the AI.

The information comes in a nice, neat package for us and that’s how we’ll learn. That’s how it will impact learning not only on a personal level, but on an even wider level. We have an education system from preschool, primary school, middle school, senior level colleges, and then universities.

These new tools are being implemented in all of these school systems; they’re being rolled out as we speak. Already we’ve seen a bit of a knee-jerk reaction where public schools, or state schools, have simply banned them from the school system entirely. But that doesn’t stop the students using them at home.

We’ve since seen some pull-back from that, which was perhaps an overreaction. Something interesting we see happening is that state schools have tended to ban them more readily than private schools, which have embraced the technology and accepted that these LLMs are here to stay: let’s teach the students how to use them effectively in their education.

This will give them the skills they’ll need when they go into the workforce, because these LLMs will be embedded in every company and organisation in the world over the next few years, just as search engines are today. So that’s education, and we need to realise this is an ongoing and dynamic process. Recent progress in AI capability has happened very quickly. These LLMs have only been publicly available for around six months, so this is very much a work in progress; watch this space as they say.

What about work? They will be embedded in every company in the world, sooner rather than later. Otherwise, companies will become uncompetitive and go out of business.

We can think of these large language models as having the brightest person in the world, in any subject, sitting next to us. Companies cannot afford not to integrate that kind of intelligence into their business applications and workflows. We are already seeing all the banks and the major corporations integrating these models as quickly as they possibly can.

Let’s take software development.

LLMs can provide a 50% improvement in productivity for software developers; this has been measured and benchmarked. There are about 20 million software developers in the world. That’s one example. Content generation is another. People have been benchmarking performance compared to a human.

How does output increase when we integrate large language models into our business processes? There is always improvement, so businesses are already integrating this technology into their systems and workstreams, into their products and services.

What is the biggest challenge for us when using AI?

Probably the learning curve. Any new technology requires some learning to be able to become proficient and then eventually become an expert in it. But there’s a lot of information and training material available on LLMs and generative AI on the Internet. Also, books from all the major publishers are appearing on how to use LLMs in various ways.

Large language models, augmented software development, content creation: there are loads of videos on YouTube about how to use this evolving technology. So there’s no shortage of information. And I believe, after the summer coming up, as we move into the autumn term, most schools will be integrating AI into their syllabus.

In summary, the biggest challenge for us when using AI is in becoming proficient with it, understanding its capabilities, and how we can apply LLMs and generative AI in the real world.


How should businesses implement AI?

Most companies don’t have the types of experts needed to implement these AI systems into their workstreams. Therefore, consulting companies are being brought in to help them adopt and integrate these systems.

Big companies, like Google, Microsoft, Facebook, Amazon, the larger banks, they already have highly skilled machine learning engineers working there. They will be able to take this technology and rapidly integrate it into their workflows without needing outside help.

The smaller companies, however, will have IT departments, but they won’t have machine learning engineers just sitting around. They will probably have to go to consultants to help them implement solutions.

Accenture, McKinsey, Deloitte, and IBM have all announced huge investments in AI training for their staff. We’re talking thousands of employees being trained up to go into companies and help integrate this technology into their work streams.

Thank you so much for giving me the opportunity to answer these questions on artificial intelligence. I have an AI consulting company myself, and this technology is really hot at the moment and I expect demand to just keep growing over the next few years.