Stanford HAI on 2024 AI Index Report

Bloomberg · Apr 17

Russell Wald, Deputy Director at the Stanford Institute for Human-Centered Artificial Intelligence, discusses his key takeaways from the institute's annual AI Index report, focusing on technological advancements, public perceptions, and the geopolitical dynamics surrounding AI's development. He speaks with Annabelle Droulers and Haidi Stroud-Watts on "Daybreak Australia".

Disclaimer: This content is for informational and educational purposes only and does not constitute a recommendation or endorsement of any specific investment or investment strategy.

Transcript

  • 00:00 Russell, this is a
  • 00:02 really significant report for the industry as a whole.
  • 00:04 But
  • 00:05 talk to us about what was perhaps
  • 00:08 your biggest takeaway from the numbers or the findings this time around.
  • 00:13 Well, naturally every year you're going to get more and more of an increase and we're seeing this hockey stick just slide up.
  • 00:20 And in that case, we're
  • 00:21 seeing a lot of interesting things here.
  • 00:24 A couple of them that I find most interesting are that AI does beat humans in some tasks, and we're starting to see that gap get closed
  • 00:32 more and more.
  • 00:34 And one other key important thing that I think we're seeing is that
  • 00:38 AI
  • 00:39 is really being dominated by industry.
  • 00:42 The frontier models are coming from industry.
  • 00:45 And just one interesting anecdote that comes from this.
  • 00:48 In 2017, the Transformer model created by Google, which really led to this explosion, to the GPT aspects of
  • 00:56 some of these models,
  • 00:59 and helped create generative AI, cost $930 to train.
  • 01:05 And if you go to last year,
  • 01:07 their latest model that just came out, Gemini Ultra, cost $194 million to train.
  • 01:13 So just in that short amount of time, you can see the difference of how much money it costs to really build these models.
  • 01:21 Yeah, those are really wild numbers, actually, Russell.
  • 01:24 But when you have that sort of dynamic going on where you've got some
  • 01:27 clear leaders in the market, and they have so much in terms of resources and so much to deploy into creating new models, how hard does that then make it for others to play any sort of catch-up, and what does that mean then?
  • 01:40 It's not just hard, actually, for industry-to-industry competition;
  • 01:45 it's also hard for other stakeholders.
  • 01:47 So I of course work at an academic institution and this is where we see a large struggle for this.
  • 01:52 Not one
  • 01:54 major university in the world could train a ChatGPT model today if it wanted to,
  • 01:59 because it doesn't really have the resources to do that.
  • 02:03 So you're constraining not just
  • 02:05 stakeholders within industry, you're constraining stakeholders within academia
  • 02:10 and even, to some extent, government, none of which have the resources to do this.
  • 02:14 That has a lot of implications.
  • 02:15 What does it mean when a limited, select few companies are doing this?
  • 02:19 And who do we want setting the rules of the road and who's at the table?
  • 02:23 So we need to start really thinking about how robust this ecosystem is and how much we want to have it exclusively within one
  • 02:30 area.
  • 02:31 Industry does great work, but it's not the only type of work that can be done here.
  • 02:37 I wanted to ask about
  • 02:38 AI decoupling and sort of
  • 02:40 the different pursuits by the likes of the
  • 02:43 US and China, right.
  • 02:44 For example, when it comes to regulation, you talk about this, you know, 50% plus increase in regulation that was seen just last year.
  • 02:51 Is that mostly
  • 02:52 in the
  • 02:52 US?
  • 02:53 And do we kind of face a reality eventually where there's going to be divergent regulatory environments
  • 02:59 for similar AI technology?
  • 03:02 Yeah, the question is whether there's any interoperability with some of these regulatory environments.
  • 03:07 But
  • 03:08 in terms of who's leading the pack probably on this, it is the EU with the most robust
  • 03:13 regulations, and that's from the EU AI Act.
  • 03:16 China actually does have some
  • 03:18 strong regulations in place.
  • 03:20 There's a question in terms of enforcement.
  • 03:22 And the
  • 03:23 US is frankly lagging in terms of regulations specific to AI.
  • 03:27 Now, that doesn't mean that there aren't regulations in place
  • 03:31 in U.S.
  • 03:32 law that could easily be applied to AI.
  • 03:34 And that's really important to understand that distinction.
  • 03:37 And furthermore, there are a couple of ideas that are in Congress that are being talked about.
  • 03:42 And of course, there is President Biden's Executive Order on Artificial Intelligence, which has been quite helpful
  • 03:50 and will guide companies.
  • 03:51 But it certainly is not the same in terms of
  • 03:54 regulation and there's not parity between states on regulation, which will be a challenge.
  • 04:01 Where do you see, and
  • 04:02 I don't know if it's even possible to predict, the apex when it comes to investment enthusiasm for generative AI?
  • 04:10 That's really interesting because this year we had some
  • 04:12 kind of fascinating findings that came out from this.
  • 04:15 So overall, in terms of AI investment, it has kind of been down for the last two years.
  • 04:21 And
  • 04:21 to some extent,
  • 04:22 from a global perspective,
  • 04:24 we might say that's probably related to interest rates and where they're at.
  • 04:28 But if you look at the
  • 04:29 US specifically, there was a 22% increase last year in private investment in AI,
  • 04:35 putting the US global share at about 67%.
  • 04:39 Now if you're China, this was quite surprising, but there was a
  • 04:43 44% decrease
  • 04:45 from
  • 04:46 previous years, and that's giving them about an 8% total in private market share, which
  • 04:51 frankly was a surprising finding.
  • 04:56 And Russell, just quickly, what do you think is sort of
  • 05:00 the attitude toward AI? Because I think there's been
  • 05:02 quite a high
  • 05:04 level of nervousness from a lot of people that their job, for instance, would be replaced.
  • 05:07 Is that sort of subsiding, or is there more of an awareness of the shape or role that AI will take in people's lives?
  • 05:14 There's an awareness, but it also depends on where you're geographically located
  • 05:19 and
  • 05:20 what your attitudes are;
  • 05:21 those are not across-the-board
  • 05:24 views, and actually people in
  • 05:27 very Western, industrialized states have a more pessimistic view towards AI than those who are not.
  • 05:33 And you know, there are some hypotheses; this report itself is a
  • 05:37 very heavily data-focused report and doesn't necessarily give inference or
  • 05:42 a
  • 05:43 strong analysis
  • 05:45 to that end.
  • 05:46 I think one reason might be
  • 05:49 there is more of a fear factor within
  • 05:53 industrialized countries that some of these jobs can
  • 05:57 be more easily replaced
  • 05:58 than they might have expected at the industrial level.