Google "We Have No Moat, And Neither Does OpenAI"

The text below is a very recently leaked document, shared on a public Discord server by an anonymous individual who has granted permission for its republication. It originates from a researcher within Google.
We Have No Moat
And neither does OpenAI
We’ve done a lot of looking over our shoulders at OpenAI. Who will cross the next milestone? What will the next move be?
But the uncomfortable truth is, we aren’t positioned to win this arms race and neither is OpenAI. While we’ve been squabbling, a third faction has been quietly eating our lunch.
I’m talking, of course, about open source. Plainly put, they are lapping us. Things we consider “major open problems” are solved and in people’s hands today. Just to name a few:
- LLMs on a Phone: People are running foundation models on a Pixel 6 at 5 tokens / sec.
- Scalable Personal AI: You can finetune a personalized AI on your laptop in an evening.
- Responsible Release: This one isn’t “solved” so much as “obviated”. There are entire websites full of art models with no restrictions whatsoever, and text is not far behind.
- Multimodality: The current multimodal ScienceQA SOTA was trained in an hour.
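The "Scalable Personal AI" point above rests on parameter-efficient finetuning methods such as LoRA (not named in this excerpt, but central to the broader memo): rather than updating a full weight matrix, you train two small low-rank matrices added on top of it. A minimal numpy sketch, with illustrative dimensions that are assumptions rather than figures from the memo:

```python
import numpy as np

# LoRA-style low-rank adapter sketch (hypothetical dimensions).
# Instead of updating the full d x d weight W, train only two small
# matrices A (r x d) and B (d x r); the effective weight is W + B @ A.
d, r = 4096, 8  # hidden size and adapter rank (assumed, illustrative values)

rng = np.random.default_rng(0)
W = rng.standard_normal((d, d)) * 0.02   # frozen pretrained weight
A = rng.standard_normal((r, d)) * 0.01   # trainable, r x d
B = np.zeros((d, r))                     # trainable, initialized to zero

x = rng.standard_normal(d)
# Forward pass: base output plus the low-rank correction.
y = W @ x + B @ (A @ x)

full_params = d * d            # weights touched by full finetuning
lora_params = A.size + B.size  # weights touched by the adapter
print(f"trainable fraction: {lora_params / full_params:.4%}")
```

Because only the adapter's roughly 0.4% of weights receive gradients, the optimizer state and gradient memory shrink by the same factor, which is what makes an evening of laptop finetuning plausible.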
While our models still hold a slight edge in terms of quality, the gap is closing astonishingly quickly. Open-source models are faster, more customizable, more private, and pound-for-pound more capable. They are doing things with $100 and 13B params that we struggle with at $10M and 540B. And they are doing so in weeks, not months. This has profound implications for us:
- We have no secret sauce. Our best hope is to learn from and collaborate with what others are doing outside Google. We should prioritize enabling 3P integrations.
- People will not pay for a restricted model when free, unrestricted alternatives are comparable in quality. We should consider where our value add really is.
- Giant models are slowing us down. In the long run, the best models are the ones which can be iterated upon quickly. We should make small variants more than an afterthought, now that we know what is possible in the <20B parameter regime.