How OpenAI’s Thrive Partnership and Chinese LLMs Are Reshaping Enterprise AI Integration

Author: Boxu Li 

OpenAI and Thrive Capital’s Enterprise AI Alliance

OpenAI’s latest strategic move underscores the push to integrate AI deeply into traditional industries. In December 2025, OpenAI took an ownership stake in Thrive Holdings, a new vehicle of Josh Kushner’s Thrive Capital, as part of a partnership to embed OpenAI’s large language models (LLMs) into verticals like accounting and IT services[1][2]. Rather than a cash investment, OpenAI is providing a dedicated research team in exchange for equity – aligning incentives so that both sides focus on AI-driven transformation of legacy business operations. The goal is to infuse AI into manual, fragmented processes in Thrive’s portfolio companies and continuously improve these models via reinforcement learning with domain experts[3]. Thrive has raised over $1B to acquire traditional service providers and overhaul them with AI[2]. This partnership signals a novel approach: OpenAI isn’t just selling API access; it’s vertically integrating by co-building AI solutions within enterprises. “Aligning OpenAI through ownership” ensures both OpenAI and Thrive share a “North Star” of developing leading AI-powered products for those industries[4]. Notably, despite Thrive Capital being a major OpenAI backer, the deal does not lock Thrive into OpenAI’s models exclusively – Thrive can still leverage other models, including open-source ones, wherever they make sense[5]. This highlights a pragmatic truth of today’s enterprise AI landscape: companies will adopt whichever model best fits their domain needs, cost constraints, and integration requirements.

Enterprise LLM Adoption Across US Industries

Across corporate America, enterprise adoption of AI – especially generative AI – has surged in the past two years. A 2024 McKinsey survey found 78% of organizations now use AI in at least one function (up from 55% a year prior), and 71% have deployed generative AI tools[6]. This indicates that LLMs have moved from experimental pilots to “essential business infrastructure” in many firms[6]. Crucially, this trend is broad-based across industries. In finance, banks are using LLMs to analyze research and assist advisors; in healthcare, LLMs draft medical reports and patient communications; in legal and consulting, they summarize documents and generate first-draft content. In e-commerce and travel, generative AI powers customer service and recommendations – for example, Airbnb’s AI concierge can help resolve guest inquiries without human agents. Retail giant Amazon uses AI models to summarize product reviews for shoppers, extracting common likes and dislikes from hundreds of comments into one paragraph[7]. This improves customer experience and speeds up sales conversions. Amazon has also rolled out generative AI tools to help marketplace sellers write better product listings, and even made Alexa more conversational with LLM integration[8]. These examples underscore a pattern: enterprises are integrating LLMs wherever they can automate workflows or enhance user interactions, from drafting marketing copy to powering chatbots and coding assistants.

However, integrating AI at scale is not trivial – many companies still struggle to move from pilot to production. Studies found as few as 5% of GenAI pilot projects achieve rapid revenue gains, with many stalling due to unclear objectives or infrastructure challenges[9]. Nevertheless, the business case for enterprise AI remains strong. Companies that succeed report solid ROI (on average 3.7× returns, per one analysis) and are reallocating budgets accordingly[10]. Enterprise spending on LLMs has ballooned – 37% of enterprises were spending over $250,000 annually on LLM usage by 2025[11]. This willingness to invest reflects a competitive imperative: firms see AI as a strategic technology to boost productivity and innovation. The result is an enterprise AI ecosystem where multiple LLMs coexist. In fact, surveys indicate a “multi-model” deployment pattern is emerging – one report found a majority of companies use at least two different AI models, often mixing offerings from OpenAI, Anthropic, Google, and open-source providers[12]. This multi-model approach allows organizations to balance strengths and weaknesses of various LLMs (e.g. one model for coding support, another for general chatbot tasks) and avoid over-reliance on a single vendor.
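In practice, the multi-model pattern described above often amounts to a thin routing layer that maps each task category to a preferred model, with an in-house fallback. The sketch below shows one minimal way to structure it; the model names, task categories, and routing table are illustrative assumptions, not recommendations.

```python
# Toy routing layer for a multi-model enterprise deployment: each task
# category maps to a preferred LLM, with a self-hosted fallback so no
# single vendor becomes a hard dependency. All names are placeholders.

ROUTING_TABLE = {
    "coding": "claude-sonnet",      # model chosen for code-assist tasks
    "chat": "gpt-4o",               # general-purpose assistant tasks
    "summarization": "qwen-14b",    # cheap self-hosted open model
}
FALLBACK_MODEL = "llama-2-70b"      # in-house hedge against outages

def route(task_category: str) -> str:
    """Return the model identifier to call for a given task category."""
    return ROUTING_TABLE.get(task_category, FALLBACK_MODEL)

if __name__ == "__main__":
    for task in ("coding", "chat", "unknown-task"):
        print(task, "->", route(task))
```

A real deployment would add per-model cost and latency tracking so the routing table can be revisited as prices and benchmarks shift.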

U.S. Companies Embracing Chinese LLMs: Cost Advantage and Implications

An unexpected twist in the enterprise AI story is the rise of Chinese open-source LLMs finding adoption in the U.S. Companies that a year ago defaulted to American models are now increasingly evaluating (and in some cases embracing) models from Alibaba, Baidu, Zhipu, MiniMax and others. The reason boils down to a powerful value proposition: these models are often free, “open-weight” (openly released model parameters), and far cheaper to run than their U.S. counterparts[13][14]. For enterprises trying to automate tasks at scale, cost and customizability can trump having the absolute state-of-the-art model. As a result, even marquee U.S. tech firms have begun experimenting with Chinese AI. In a recent example that made waves, Airbnb’s CEO Brian Chesky revealed the company relies heavily on Alibaba’s Qwen LLM for its automated customer service agent – favoring it over OpenAI’s latest models because “there are faster, cheaper models”[15]. Chesky specifically praised Qwen, saying “We’re relying a lot on Alibaba’s Qwen model. It’s very good. It’s also fast and cheap.”[16]. Indeed, Airbnb’s new AI concierge launched in 2025 was built using a mixture of 13 models (including OpenAI, Google, and open-source), but the Chinese Qwen underpins much of the heavy lifting[17]. The cost savings from using Qwen and others have allowed Airbnb to automate 15% of support requests and cut resolution time from hours to seconds[18] – a tangible business impact.

Airbnb is not alone. Some well-funded startups and VCs have also turned eastward for AI shortcuts. Prominent investor Chamath Palihapitiya remarked that his firm moved its AI workflows off Amazon’s proprietary services onto Moonshot’s Kimi model (from a Beijing-based startup) because it was “way more performant.”[19] Similarly, former OpenAI CTO Mira Murati’s new venture released a tool that lets users fine-tune open models – including eight variants of Qwen – highlighting the interest in building on Chinese LLM foundations[20]. The appeal is clear: “to an average startup, what really matters is speed, quality, and cost… Chinese models have consistently performed well on balancing those three”[21]. Chinese AI labs have aggressively open-sourced models with permissive licenses, enabling anyone to download the model weights and customize them. Alibaba, for instance, open-sourced versions of its Tongyi Qianwen (Qwen) model family under an Apache 2.0 license, ranging from 4B to over 70B parameters[22][23]. This means a company can “pull those weights off the internet and fine-tune on proprietary data” to get a domain-specific model without starting from scratch[24]. A South Korean firm did exactly that – fine-tuning Qwen for government document processing – and cut its costs by 30% as a result[25]. Alibaba reports over 170,000 models have been derived from Qwen worldwide, and that over 90,000 enterprises (spanning consumer electronics, gaming, etc.) have adopted Qwen-family models in some form[26]. These figures signal that Chinese LLMs have quickly gained a foothold among developers and businesses globally.

The implications of this trend are double-edged. On one hand, U.S. AI providers are seeing potential enterprise revenue siphoned away. Every workload handled by an open Chinese model is one not running on OpenAI’s or Anthropic’s paid APIs. As The Wire China noted, if U.S. model developers are losing big clients like Airbnb to Chinese competitors, that is a “warning sign” that the U.S. approach (often proprietary and high-cost) is faltering for some use cases[27]. Frontier model labs like OpenAI, Anthropic, and Google may be forced to respond – whether by lowering prices, offering open variants (indeed OpenAI just released some “open-weight” GPT models[28]), or focusing on truly differentiated capabilities. Another implication is strategic and geopolitical: the increasing reliance on Chinese AI by U.S. firms raises trust and security questions. Thus far, pragmatism around cost and performance has in some cases overridden concerns about data governance. But Washington is paying attention – U.S. regulators added Chinese AI firm Zhipu (maker of GLM models) to a trade blacklist in early 2025[29], and a Commerce Department report warned about security risks of foreign AI and noted that the rise of Chinese models is “cutting into U.S. developers’ historic global lead.”[29][30] There is even momentum in the U.S. to limit the use of Chinese LLMs in government settings, potentially extending to businesses if national security pressures grow[27]. At the enterprise level, due diligence on Chinese AI vendors is increasing – companies must weigh the benefits of models like Qwen or DeepSeek against risks like data exposure or compliance issues. Some mitigate this by self-hosting the open-source models internally (avoiding external API calls)[31][32], which addresses data residency concerns but requires significant in-house engineering. Others take a hybrid approach: using Chinese models for non-sensitive tasks while reserving sensitive data for models they trust more. 

In any case, the entrance of Chinese LLMs has injected healthy competition. It has pushed U.S. firms to innovate on efficiency (for example, Anthropic’s latest Claude is significantly cheaper and faster than its predecessor) and even to open up aspects of their tech. As one AI researcher put it, “the long-term dominance of American AI depends heavily on [not] ceding the lead in open-source to China.”[33]
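The cost side of the self-hosting decision mentioned above is ultimately simple arithmetic. The back-of-envelope sketch below compares per-token API billing with amortized self-hosted inference; every price and volume in it is a hypothetical placeholder, not a quoted rate.

```python
# Back-of-envelope comparison: paying a proprietary API per token vs
# amortizing self-hosted open-weight inference on rented GPUs.
# All prices, volumes, and overhead figures are illustrative assumptions.

def api_monthly_cost(tokens_per_month: float, price_per_million: float) -> float:
    """Monthly API spend at a flat per-million-token rate."""
    return tokens_per_month / 1_000_000 * price_per_million

def self_hosted_monthly_cost(gpu_hours: float, gpu_hourly_rate: float,
                             engineering_overhead: float) -> float:
    """Monthly self-hosting spend: GPU rental plus fixed ops/engineering cost."""
    return gpu_hours * gpu_hourly_rate + engineering_overhead

if __name__ == "__main__":
    tokens = 2_000_000_000                              # 2B tokens/month (hypothetical)
    api = api_monthly_cost(tokens, 10.0)                # $10 per 1M tokens (placeholder)
    hosted = self_hosted_monthly_cost(720, 2.5, 4_000)  # one GPU-month at $2.50/hr + ops
    print(f"API: ${api:,.0f}/mo  self-hosted: ${hosted:,.0f}/mo")
```

The crossover point depends heavily on volume: at low traffic the fixed engineering overhead dominates and APIs win, while at high, steady token volumes self-hosting an open model tends to pull ahead.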

Top 5 AI Models Being Integrated by Enterprises

In today’s multi-model environment, a handful of LLMs have emerged as the go-to choices for enterprise integration:

1. OpenAI GPT Series (GPT-4 and GPT-5) – OpenAI’s GPT models remain a staple in enterprise AI, known for their advanced capabilities and general reliability. GPT-4 brought breakthrough performance in natural language understanding and generation, and OpenAI’s newly announced GPT-5 is an even more significant leap – excelling across coding, math, writing, and multimodal tasks[34]. Many companies access GPT models via Azure OpenAI Service or OpenAI’s API to power chatbots, writing assistants, and decision-support tools. GPT-4/5’s strengths are complex reasoning and fluent dialog, making them ideal for applications like legal document analysis or internal helpdesk automation. However, they are proprietary and come at a premium cost. OpenAI’s enterprise market share was about 25% as of 2025[35], reflecting widespread use but also competition. OpenAI has responded to the open-source wave by releasing GPT-OSS (open-weight) models of 120B and 20B parameters for self-hosting[28], aiming to court businesses that need more control. Still, for many enterprises, GPT-4 remains the benchmark for quality – often used when accuracy is paramount.

2. Anthropic Claude – Developed by Anthropic, Claude has quickly become one of the most popular AI systems for enterprise. In fact, by late 2025 Anthropic reportedly edged out OpenAI with 32% of enterprise LLM market share[35]. Claude’s popularity stems from its design philosophy: it’s built to be helpful, honest, and harmless (aligned), and it offers very large context windows (100K+ tokens), enabling it to handle lengthy documents or multi-turn conversations with ease. The latest Claude 4 series (Opus 4, Sonnet 4.5) delivers top-tier performance on coding and reasoning tasks[36], making Claude a strong competitor to OpenAI’s models. Enterprises use Claude for things like analyzing software code, generating knowledge base articles, and as an AI assistant in tools like Slack. Claude’s balance of intelligence, speed, and lower risk of offensive outputs appeals especially in customer-facing or sensitive applications. Anthropic’s close partnership with AWS has also made Claude accessible to companies via Amazon Bedrock. Overall, Claude is valued for its long memory and reliability, and many organizations run it alongside GPT models to compare responses for quality and tone.

3. Meta’s LLaMA 2 – LLaMA 2 is the leading open-source LLM from Meta AI, released in mid-2023, and it has become a foundation for many custom enterprise AI solutions. Unlike proprietary models, LLaMA 2’s weights are available (with a permissive license for research and limited commercial use), meaning companies can fine-tune it on their own data. This model (available in sizes up to 70B parameters) demonstrated that open models can approach the power of closed models. Within months of release, LLaMA 2 spurred a wave of innovation – spawning countless fine-tuned variants and industry-specific models. It became common to see enterprises using a LLaMA 2 derivative model internally for tasks like software documentation, internal code completion, or drafting reports, especially when data privacy was a concern. Meta’s open approach also pressured other players (e.g., OpenAI) to consider open-weight offerings. While newer open models (including Chinese ones) have since surpassed LLaMA 2 on some benchmarks, it remains a popular choice for companies that need a solid base model they can fully control. In fact, until recently LLaMA-based models dominated new AI model uploads on Hugging Face, before Alibaba’s Qwen overtook it in volume of new derivatives[37] – a testament to how widely LLaMA was adopted in the AI developer community. Tech companies like IBM even partnered with Meta to offer LLaMA 2 through their platforms (IBM’s watsonx), targeting enterprise AI builders. LLaMA 2’s impact is that it opened the door for open-source LLM adoption in enterprise settings, paving the way for the newer generation of open models.

4. Alibaba Qwen – Qwen (full name Tongyi Qianwen) is Alibaba Cloud’s flagship LLM and arguably the most successful Chinese open-source model on the global stage. Alibaba released Qwen-7B and 14B under an Apache 2.0 license in 2023, and later introduced larger Mixture-of-Experts versions (up to 70B+ effective parameters in Qwen 3). Qwen models are known for their efficiency and multilingual capabilities, and Alibaba has tailored variants like Qwen-Coder (for programming) and Qwen-VL (vision-language)[38][39]. Critically, Qwen is free to use and modify, which led to massive uptake: by late 2025, Alibaba reported over 90,000 enterprises (in sectors from consumer electronics to gaming) using Qwen-family models[26]. Many are Chinese companies, but Qwen has made inroads globally thanks to its performance-cost ratio. Airbnb’s adoption of Qwen for its AI agent showcased the model’s capabilities in English customer service at scale[15]. Other startups have fine-tuned Qwen for their needs, benefiting from the robust base model without paying API fees. Qwen’s impact on enterprises is often in cost savings: companies can deploy Qwen on their own cloud for a fraction of the cost of calling an API like GPT-4. And performance-wise, Qwen 14B has been competitive with models two or three times its size in many tasks. Alibaba continues to advance Qwen (the latest Qwen-3 series uses Mixture-of-Experts to boost performance while using fewer active parameters[40]). For enterprises, Qwen offers a mature, production-ready open model backed by a tech giant – and it doubles as part of Alibaba’s cloud offerings, which some multinational companies use in regions where Alibaba Cloud has presence. As a result, Qwen has positioned itself as a top LLM choice, particularly for companies seeking an alternative to Western APIs either for cost, flexibility, or locality reasons.

5. DeepSeek – DeepSeek is a newer entrant that has quickly gained attention for its ultra-low-cost and open approach. Developed by a Hangzhou-based Chinese AI company, DeepSeek’s model series might not be a household name yet, but among developers it’s trending as a game-changer for affordable AI. The DeepSeek V3 models are massive (hundreds of billions of parameters) and open-sourced under the MIT license[41], meaning enterprises can freely download and deploy them for commercial use[42]. What really sets DeepSeek apart is its relentless focus on optimization for cost-efficient inference. The latest DeepSeek V3.2-Exp model slashed inference costs by 50%, offering input processing at just $0.028 per million tokens via its API[43] – orders of magnitude cheaper than OpenAI’s GPT-4. Even at contexts approaching 128K tokens (hundreds of pages of text), DeepSeek maintains low cost and solid performance[43][44]. Technically, it achieves this via innovations like DeepSeek Sparse Attention (DSA) – a mechanism that selectively attends to the most relevant tokens (via a “lightning indexer”) instead of every token in a long sequence[45]. This dramatically reduces computation for long prompts while preserving answer quality, effectively flattening the cost curve for long-context tasks[46]. Thanks to such efficiency, DeepSeek is ideal for applications like processing lengthy legal contracts or performing multi-turn research dialogues without running up huge cloud bills. Enterprises also appreciate that DeepSeek can be self-hosted – the full model weights (e.g. the 685B-parameter V3.2) are downloadable, and the company even provides optimized inference kernels and Docker images for deployment[47][48]. In practice, some U.S. organizations are testing DeepSeek as a way to diversify from reliance on big U.S. providers – a hedge against vendor lock-in[49]. 
It’s viewed as a “cost-efficient alternative” for general-purpose tasks like drafting, summarization, and chat, where its slightly lower quality than frontier models is an acceptable trade-off for huge savings[50]. DeepSeek’s presence in enterprise is still emerging, but it was telling that on OpenRouter (a popular platform routing AI model API calls), DeepSeek and Qwen both cracked the top 10 most-used models by developers in late 2025 – whereas a year prior that list was dominated by U.S. models[51][52]. Going forward, if DeepSeek continues iterating quickly and keeping costs exceptionally low (the company already hinted that its API undercuts even open-source rival Meta’s Llama in long-context cases[53]), it could become a staple for budget-conscious enterprise AI deployments.
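To make the sparse-attention idea concrete, here is a toy sketch: a cheap scoring pass picks the top-k most relevant past tokens, and the expensive softmax-attention step runs only over that subset, O(k) instead of O(n). This illustrates the general principle only; the dot-product scorer below is a stand-in for the indexer, not DeepSeek’s actual DSA implementation.

```python
# Toy illustration of sparse attention: a lightweight scorer selects the
# top-k keys, then attention weights are computed over just that subset.
# The scoring function and all vectors are illustrative, not DSA itself.

import math

def cheap_relevance_scores(query: list[float], keys: list[list[float]]) -> list[float]:
    """Lightweight dot-product scorer standing in for the indexer pass."""
    return [sum(q * k for q, k in zip(query, key)) for key in keys]

def sparse_attention(query, keys, values, k=2):
    """Attend only over the top-k keys ranked by the cheap relevance score."""
    scores = cheap_relevance_scores(query, keys)
    top = sorted(range(len(keys)), key=lambda i: scores[i], reverse=True)[:k]
    # Softmax over just the selected subset instead of the full sequence.
    exp = [math.exp(scores[i]) for i in top]
    total = sum(exp)
    weights = [e / total for e in exp]
    dim = len(values[0])
    return [sum(w * values[i][d] for w, i in zip(weights, top)) for d in range(dim)]
```

The payoff is that for a long prompt only the scoring pass touches every token; the quadratic attention cost is confined to the small selected subset, which is why long-context pricing can stay nearly flat.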

Macaron – A Consumer AI Agent Poised for Asia’s Market

While much of the AI industry’s focus has been on enterprise use cases, one company betting on consumer-facing AI is Macaron AI. Macaron, a Singapore-based startup, has introduced what it calls the “world’s first personal AI agent” – essentially an AI assistant devoted to enriching an individual’s daily life rather than just workplace productivity[54]. The Macaron AI agent is designed to act like a personalized digital companion. It can instantly turn a user’s request (even a single sentence) into a custom mini-app or solution[55][56]. For example, tell Macaron “Plan a weekend trip to Kyoto” and it will generate a tailored itinerary app; ask it to help you start a fitness habit, and it creates a personalized workout tracker – all in seconds, with no coding. This life-focused intelligence is a deliberate differentiator: Macaron isn’t just summarizing emails or writing code, it’s tackling everyday personal tasks. As the company puts it, Macaron’s specialty is “turning a single sentence into a working mini-app… life-centric focus sets it apart from AI agents that mostly help with office work.”[56]

Under the hood, Macaron is pushing technical boundaries as well. It employs a massive mixture-of-experts model (MoE) with an unprecedented 1 trillion+ parameters and a sophisticated training setup[57][58]. The team implemented innovative scaling techniques (hybrid parallelism, low-rank adaptation fine-tuning, etc.) to make such a huge model feasible[59][60]. Why so large? Because Macaron is aiming for an AI that feels truly personalized and human-like in its breadth of knowledge. It has an extensive long-term memory module that builds a personal knowledge base about the user – remembering your preferences, important events, and context from past conversations[61][62]. Over time, it learns to anticipate your needs and adapt its tone and suggestions to your unique style, almost like an ever-attentive friend. For instance, Macaron can remind you of a task you mentioned last week, or suggest a restaurant for date night based on your dietary habits and past favorites. It also enables a social dimension to AI – users can invite Macaron into group chats to assist collaboratively, and share or co-create the mini-apps it builds[63][64], turning AI into a communal experience.

Macaron’s focus on consumers and lifestyle could tap a huge opportunity in Asia. In China and other Asian markets, the big tech players have poured resources into AI, but much of that has been enterprise-oriented or infrastructure-focused (enterprise cloud services, government projects, etc.) or applied to enhancing existing super-apps and platforms. A standalone personal agent aimed purely at individual empowerment and creativity is rarer. Macaron sees this gap – positioning itself as a lifestyle AI that helps you “live better, not just work better.” Its services (travel planning, health coaching, relationship advice, hobby facilitation, journaling, etc.) play into the Asian consumer trends of digital personal assistants and “super-apps,” but with far more personalization. If Macaron can navigate local cultures and data regulations (something it explicitly is working on, by integrating local privacy norms and AI ethics considerations into its design for markets like Japan and Korea[65]), it could find enthusiastic uptake. The Chinese market, in particular, is massive – and while domestic giants have their own chatbots (like Baidu’s Ernie or Tencent’s mix of services), a nimble personal-agent solution that works across platforms might carve out a niche. Macaron’s success will hinge on trust (storing a user’s life memories is sensitive) and on delivering real value beyond what a user’s phone and apps already do. But its approach – a co-designer of personal solutions – suggests new possibilities. In an AI landscape largely split between enterprise tools and generic chatbots, Macaron is an example of a startup aiming to define a new category of consumer AI. Its progress will be a fascinating case study in how an AI designed for personal empowerment can coexist and perhaps flourish alongside the big enterprise AI initiatives.

Future Outlook: Open-Source Chinese AI and US Enterprise Strategy

What does the future hold if more U.S. companies continue to adopt open-source Chinese AI models? In the near term, we can expect heightened competition and faster innovation. U.S. AI firms will be pressed to respond to the “China price” – we may see further cost cuts for AI services and more open releases from Western companies. (OpenAI’s release of GPT-OSS models in 2025 is one such response, and Google already tiers Gemini into smaller, cheaper versions to stay competitive.) This competition benefits enterprise buyers, who will have a richer menu of models at various price-performance points. We’re already seeing this: for example, Anthropic’s Claude 4 is offered in multiple versions (Opus, Sonnet, Haiku) to balance power vs. cost[66][36], and startups like MiniMax proudly advertise that their model costs only 8% of what Anthropic’s does for similar performance[67]. If Chinese open models keep gaining adoption, American providers might also accelerate research into efficiency techniques (like the sparse attention and MoE strategies Chinese teams use) to close the gap on throughput and cost. In fact, a cross-pollination is occurring – research ideas flow globally, so one positive outcome is that overall progress in AI capabilities could speed up as teams build on each other’s breakthroughs.

At the same time, trust and governance will be pivotal. Businesses will demand assurances about any model they use, whether from Silicon Valley or Beijing. This could give rise to third-party audit and certification of AI models for security, much as cloud data centers today undergo security audits. The U.S. government may also play a role: for example, it could issue guidelines or even restrictions for certain sectors (like defense, critical infrastructure, etc.) on using foreign-developed AI. “There are natural concerns over what data the model was trained on and whether it exhibits behaviors the company wouldn’t want,” noted Air Street Capital’s Nathan Benaich, referring to high-stakes enterprise uses of foreign models[68]. We might see the emergence of compliance-oriented AI solutions – e.g. U.S.-hosted forks of open models that are vetted for security – giving enterprises a safer way to leverage these innovations. Indeed, some organizations are already pursuing a “best of both worlds” approach: they take an open model like Qwen or Llama, remove or retrain any problematic aspects, and run it on their own secured infrastructure, thus enjoying cost benefits without sending data to an external entity[31][32].

If Chinese open-source AI continues to proliferate in U.S. enterprise, it may also alter the balance of AI expertise and talent. Open models lower the barrier to entry for building AI-driven products, which could spawn more startups and solutions – a win for innovation broadly. However, if the core technology underpinning many of those solutions comes from China, that could translate to influence. For instance, Chinese companies could start offering paid support, consulting, or premium add-ons for their open models (much like Red Hat did for Linux in the open-source software world). The U.S. tech industry might find itself in the ironic position of leveraging Chinese “open” tech widely, even as geopolitical rivalry persists. From a strategic standpoint, this trend might actually encourage more collaboration in the AI research community – if Chinese and Western labs are all building on each other’s open contributions, a shared technical foundation could emerge (with common standards or frameworks). But it could equally lead to fragmentation, where two ecosystems evolve: one dominated by fully open, low-cost models (with a strong foothold in Asia and among cost-sensitive startups globally), and another of premium, proprietary models (dominant in high-security domains and among big enterprises that prioritize top quality and support).

For U.S. companies, a key future consideration will be “vendor diversification” in AI strategy. Relying solely on one AI partner (say just OpenAI or just Alibaba) carries risks – of price changes, outages, or policy shifts. Many CIOs will prefer a portfolio: maybe a primary LLM provider plus a backup open model in-house as a contingency. Chinese models being in the mix strengthens that hand, giving enterprises additional leverage. As the VentureBeat analysis noted, DeepSeek’s open-source approach offers a hedge against lock-in – but boards and security teams will ask tough questions if the hedge comes from a Chinese vendor[69]. Those questions will likely drive a lot of discussion in boardrooms and IT architecture reviews in the coming years.

Finally, it’s important to note that the U.S. still holds some critical advantages in the AI race: access to the most advanced semiconductor hardware, a stronger pipeline of top AI research talent globally, and (for now) the most comprehensive troves of high-quality training data. As the YouTube commentary that introduced this discussion pointed out, the U.S. “just has better access to higher quality data and GPUs… naturally, the US will create better models like we always have” – while “China will continually undercut the market” on cost[70][71]. This suggests a future where U.S. companies continue pushing the frontier of raw capability, and Chinese companies focus on making AI widely accessible and affordable. In enterprise terms, the premium segment (companies that need the absolute best model and will pay for it) may remain loyal to U.S. frontier models, whereas the mass adoption segment (companies that need “good enough” AI at the lowest cost) might increasingly go with Chinese-origin open models. The OpenAI-Thrive partnership itself can be seen as a response to this dynamic: by deeply embedding AI into industry workflows and learning from real use, OpenAI hopes to maintain an edge that isn’t just about model quality but about whole-product integration and domain expertise.

In conclusion, the landscape of enterprise AI integration is being reshaped by collaborations like OpenAI’s with Thrive that bring AI into core business processes, and by the influx of capable, low-cost Chinese LLMs that broaden the options for enterprises. We are likely headed into an era of co-opetition, where American and Chinese AI ecosystems both compete and inadvertently collaborate (through open-source) to advance the state-of-the-art. For enterprises, this is generally positive: more choice, more innovation, and the ability to mix-and-match AI solutions to fit their needs. The winners in business will be those that can strategically leverage this diversity of AI models – tapping the strength of each where it fits, managing the risks, and staying agile as the technology continues to evolve at breakneck speed. In the end, whether it’s a cutting-edge OpenAI system or a free Chinese model on Hugging Face, what matters to businesses is outcomes. If an AI model can automate a task, save costs, or open a new product opportunity, it’s going to find a welcome home. And in 2025 and beyond, those homes will increasingly host a blend of East and West in their AI toolkit – a development that would have seemed far-fetched not long ago, but is now the reality of our globally intertwined tech industry.



[1] [2] [3] [4] [5] OpenAI takes stake in Thrive Holdings in latest enterprise AI push | Reuters

https://www.reuters.com/business/openai-buys-stake-thrive-holdings-push-ai-into-accounting-it-services-2025-12-01/

[6] [9] [10] [11] [12] [35] 13 LLM Adoption Statistics: Critical Data Points for Enterprise AI Implementation in 2025

https://www.typedef.ai/resources/llm-adoption-statistics

[7] [8] Companies Using Generative AI: Real Life Examples

https://indatalabs.com/blog/companies-using-generative-ai

[13] [14] [19] [20] [21] [24] [25] [27] [29] [30] [33] [37] [51] [52] [67] [68] [70] [71] Cheap and Open Source, Chinese AI Models Are Taking Off - The Wire China

https://www.thewirechina.com/2025/11/09/cheap-and-open-source-chinese-ai-models-are-taking-off/

[15] [16] [17] [18] Airbnb CEO Brian Chesky makes it clear, says: We don't use OpenAI's latest models in production because … - The Times of India

https://timesofindia.indiatimes.com/technology/tech-news/airbnb-ceo-brian-chesky-makes-it-clear-says-we-dont-use-openais-latest-models-in-production-because-/articleshow/124728422.cms

[22] [23] [26] [28] [34] [36] [38] [39] [40] [66] Top 9 Large Language Models as of November 2025 | Shakudo

https://www.shakudo.io/blog/top-9-large-language-models

[31] [32] [41] [42] [43] [44] [45] [46] [47] [48] [49] [50] [53] [69] DeepSeek's new V3.2-Exp model cuts API pricing in half to less than 3 cents per 1M input tokens | VentureBeat

https://venturebeat.com/ai/deepseeks-new-v3-2-exp-model-cuts-api-pricing-in-half-to-less-than-3-cents

[54] Macaron AI, the World's First Personal Agent, Officially Launches ...

https://finance.yahoo.com/news/productivity-ai-personal-ai-macaron-120000260.html

[55] [56] [57] [58] [59] [60] [61] [62] [63] [64] The Macaron Experience - AI That Helps You Live Better

https://www.prnewswire.com/news-releases/the-macaron-experience--ai-that-helps-you-live-better-302610064.html

[65] Socio‑Technical Integration: Navigating Culture ... - Macaron AI

https://macaron.im/en/blog/socio-technical-integration-macaron-asia

Boxu earned his Bachelor's degree in Quantitative Economics at Emory University. Before joining Macaron, he spent most of his career in private equity and venture capital in the US. He is now the Chief of Staff and VP of Marketing at Macaron AI, handling finances, logistics, and operations, and overseeing marketing.
