OpenAI's Lead Is Contracting as AI Competition Intensifies
OpenAI's rivals are cutting into ChatGPT's lead. From a report: The top chatbot's market share among daily U.S. users of its mobile app fell from 69.1% to 45.3% between January 2025 and January 2026. Over the same period, Gemini rose from 14.7% to 25.1% and Grok rose from 1.6% to 15.2%.
The data, obtained by Big Technology from mobile insights firm Apptopia, indicates the chatbot race has tightened meaningfully over the past year, with Google's surge showing up in the numbers. Overall, the chatbot market has grown 152% since last January, according to Apptopia, with ChatGPT still exhibiting healthy download growth.
On desktop and mobile web, a similar pattern appears, according to analytics firm Similarweb. Visits to ChatGPT went from 3.8 billion to 5.7 billion between January 2025 and January 2026, a 50% increase, while visits to Gemini went from 267.7 million to 2 billion, a 647% increase. ChatGPT is still far and away the leader in visits, but it has company in the race now.
Re:google has the google.com advantage (Score: 5, Informative)
by LostMyBeaver ( 1226054 ) on Wednesday February 04, 2026 @02:20AM (#65968156)
Give credit where it's due.
I basically stopped using Google most of the time because I could use Copilot for most things. So, I suppose if I were to measure, I google about 70% less than I used to. I mean, most of my googling was figuring out how to do things, and these days I spend most of my time telling Copilot to figure out how to do things instead.
That said, I tend to Google when ChatGPT is failing. And, well, it fails a lot. It's really just not a very good product.
So, then I use Gemini through Google and more often than not, it gets it right when OpenAI bombs it.
Gemini has become a better set of models than ChatGPT. I probably wouldn't even use ChatGPT if Windows wasn't so utterly intertwined with it.
That said, I pay $10 a month for AI. I have my own LLM server based on a $120 graphics card, and it's getting REALLY good now. I don't think I'll be using cloud LLMs much longer. Thinking models don't need to be big, so a 10-16GB GPU should be enough; 24GB would be nicer for a longer context length, though. It's pretty funny that Qwen 2.5 7B actually outperforms the biggest and baddest models if you use it agentically and tell it to just figure it out. It doesn't need to know absolutely everything; it only needs to know how to research and take notes as it goes along.
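The "research and take notes as it goes" pattern the commenter describes can be sketched as a tiny agentic loop. This is a hypothetical illustration, not the commenter's actual setup: the model and search tool below are stand-in stubs, and in practice you would replace them with calls to a local inference server (e.g. Ollama serving a small model such as `qwen2.5:7b`) and a real research tool.

```python
# Minimal sketch of an agentic loop: the small model doesn't need to know
# everything up front; it researches, accumulates notes, and answers once
# the notes are sufficient. Both helpers below are illustrative stubs.

def fake_model(prompt: str) -> str:
    """Stand-in for a local LLM call. Asks to search until the prompt
    contains a research result, then answers from the notes."""
    if "result for" not in prompt:
        return "SEARCH: how to parse JSON in Python"
    return "ANSWER: use the json module's loads() function"

def fake_search(query: str) -> str:
    """Stand-in for a research tool (web search, docs lookup, etc.)."""
    return f"result for '{query}': json.loads parses a JSON string"

def agentic_loop(task: str, model=fake_model, search=fake_search,
                 max_steps: int = 5) -> str:
    notes = []  # the model's running scratchpad
    for _ in range(max_steps):
        prompt = f"Task: {task}\nNotes so far: {notes}"
        action = model(prompt)
        if action.startswith("SEARCH:"):
            # Research step: run the query and append the result as a note.
            notes.append(search(action[len("SEARCH:"):].strip()))
        elif action.startswith("ANSWER:"):
            # The model decided its notes are sufficient.
            return action[len("ANSWER:"):].strip()
    return "gave up"

print(agentic_loop("parse JSON in Python"))
```

Swapping the stubs for a real local model keeps the same structure: the loop, not the model's memorized knowledge, does the heavy lifting, which is why a 7B model with a tool can punch above its weight.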